Flux2TTRTrainer

Category: advanced/attention (experimental)

Phase-1: distill Flux TTR linear attention modules from native attention.
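The node's internals are not documented here, but the idea it names — distilling a linear (Taylor-feature) attention module to match native softmax attention — can be sketched generically. Below is a minimal, hypothetical numpy illustration: a 2nd-order Taylor feature map `phi` such that `phi(q) @ phi(k) = 1 + q·k + (q·k)²/2` approximates the exp kernel, a linear-attention student built on it, and an MSE distillation loss against the softmax teacher. All function names and the scaling choice are assumptions for illustration, not the pack's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_attention(q, k, v):
    """Teacher: native scaled dot-product attention."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def taylor_features(x):
    """2nd-order Taylor feature map: phi(q) @ phi(k) == 1 + q.k + (q.k)**2 / 2.
    The quadratic term is vec(x x^T) / sqrt(2); the dot product is always > 0."""
    n, d = x.shape
    ones = np.ones((n, 1))
    quad = np.einsum('ni,nj->nij', x, x).reshape(n, d * d) / np.sqrt(2.0)
    return np.concatenate([ones, x, quad], axis=-1)

def linear_attention(q, k, v):
    """Student: attention in feature space, linear in sequence length."""
    scale = q.shape[-1] ** -0.25  # split the 1/sqrt(d) between q and k (assumption)
    fq, fk = taylor_features(q * scale), taylor_features(k * scale)
    num = fq @ (fk.T @ v)                 # (n, d_v)
    den = fq @ fk.sum(axis=0)[:, None]    # (n, 1) normalizer
    return num / den

n, d = 8, 4
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
teacher = softmax_attention(q, k, v)
student = linear_attention(q, k, v)
loss = float(np.mean((student - teacher) ** 2))  # per-layer distillation target
```

In a Phase-1 distillation setup, a loss of this shape would be minimized over the student module's parameters layer by layer while the teacher stays frozen; here the student has no learned parameters, so the loss is only evaluated, not trained.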

Pack: ComfyUI-Taylor-Attention

Module: custom_nodes.ComfyUI-Taylor-Attention

Inputs (31)

| Name | Type | Required |
| --- | --- | --- |
| model | MODEL | required |
| latents | LATENT | required |
| conditioning | CONDITIONING | required |
| steps | INT | required |
| training | BOOLEAN | required |
| training_preview_ttr | BOOLEAN | required |
| checkpoint_path | STRING | required |
| feature_dim | INT | required |
| query_chunk_size | INT | required |
| key_chunk_size | INT | required |
| landmark_fraction | FLOAT | required |
| landmark_min | INT | required |
| landmark_max | INT | required |
| text_tokens_guess | INT | required |
| alpha_init | FLOAT | required |
| training_query_token_cap | INT | required |
| replay_buffer_size | INT | required |
| replay_offload_cpu | BOOLEAN | required |
| replay_max_mb | INT | required |
| train_steps_per_call | INT | required |
| readiness_threshold | FLOAT | required |
| readiness_min_updates | INT | required |
| enable_memory_reserve | BOOLEAN | required |
| layer_start | INT | required |
| layer_end | INT | required |
| cfg_scale | FLOAT | required |
| min_swap_layers | INT | required |
| max_swap_layers | INT | required |
| inference_mixed_precision | BOOLEAN | required |
| controller_checkpoint_path | STRING | required |
| training_config | TTR_TRAINING_CONFIG | optional |
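The `landmark_fraction`, `landmark_min`, and `landmark_max` inputs suggest the usual fraction-of-sequence landmark count clamped to a fixed range; the exact rule is not documented, so the helper below is a plausible sketch inferred from the parameter names only.

```python
def num_landmarks(seq_len: int, fraction: float, lo: int, hi: int) -> int:
    """Hypothetical: take `fraction` of the tokens as landmarks,
    then clamp the count to [lo, hi] (assumed semantics of
    landmark_fraction / landmark_min / landmark_max)."""
    return max(lo, min(hi, int(round(seq_len * fraction))))
```

For example, with `fraction=0.05`, `lo=64`, `hi=256`: a 4096-token sequence yields 205 landmarks, a 100-token sequence is clamped up to 64, and a 10000-token sequence is clamped down to 256.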

Outputs (2)

| Name | Type |
| --- | --- |
| MODEL | MODEL |
| loss_value | FLOAT |