Load Diffusion Model INT8 (W8A8)

OTUNetLoaderW8A8

Loads INT8 tensorwise-quantized models and runs fast inference via torch._int_mm.
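To illustrate the W8A8 scheme the node name refers to, here is a minimal, hypothetical sketch in plain Python: both the weights (W8) and the activations (A8) are mapped to int8 with a single symmetric per-tensor scale, multiplied with int32 accumulation (the role torch._int_mm plays on GPU), and dequantized by the product of the two scales. Function names are illustrative, not the pack's actual API.

```python
def quantize_tensorwise(x):
    """Map a float matrix to int8 with one symmetric per-tensor scale."""
    amax = max(abs(v) for row in x for v in row) or 1.0
    scale = amax / 127.0
    q = [[max(-128, min(127, round(v / scale))) for v in row] for row in x]
    return q, scale

def int8_matmul_dequant(a_q, a_scale, w_q, w_scale):
    """Integer matmul with int32-style accumulation, then dequantize.

    This mirrors what a torch._int_mm-based path does: the int8 x int8
    products are summed as integers, and the result is rescaled to float
    by the product of the two per-tensor scales.
    """
    n = len(w_q[0])
    out = []
    for row in a_q:
        acc = [sum(row[k] * w_q[k][j] for k in range(len(row))) for j in range(n)]
        out.append([v * a_scale * w_scale for v in acc])
    return out

activations = [[0.5, -1.0], [2.0, 0.25]]
weights = [[1.0, 0.0], [0.0, 1.0]]  # identity, so output ~ activations

a_q, a_s = quantize_tensorwise(activations)
w_q, w_s = quantize_tensorwise(weights)
y = int8_matmul_dequant(a_q, a_s, w_q, w_s)
```

With identity weights the dequantized output recovers the activations to within tensorwise-quantization error (here under 1%).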

Pack: ComfyUI-Flux2-INT8

custom_nodes.ComfyUI-Flux2-INT8

Inputs (4)

Name                     Type     Required
unet_name                COMBO    required
weight_dtype             COMBO    required
model_type               COMBO    required
on_the_fly_quantization  BOOLEAN  required
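The on_the_fly_quantization input suggests the loader can convert a full-precision checkpoint to INT8 at load time rather than requiring a pre-quantized file. A hedged sketch of what that step plausibly involves, for a single weight tensor (names are illustrative, not the pack's real implementation):

```python
def quantize_weight_on_the_fly(w_fp):
    """Replace a float weight vector with int8 values plus one
    symmetric per-tensor scale (the tensorwise W8 representation)."""
    amax = max(abs(v) for v in w_fp) or 1.0
    scale = amax / 127.0
    w_int8 = [max(-128, min(127, round(v / scale))) for v in w_fp]
    return w_int8, scale

def dequantize(w_int8, scale):
    """Recover approximate float weights for inspection/debugging."""
    return [v * scale for v in w_int8]

w = [0.02, -0.5, 0.31]
q, s = quantize_weight_on_the_fly(w)
w_back = dequantize(q, s)
```

The trade-off of on-the-fly conversion is slower model loading in exchange for reduced memory use during inference.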

Outputs (1)

Name   Type
MODEL  MODEL