FP8 quantized version of AuraFlow v0.3
All linear weights of the flow transformer were simply cast to torch.float8_e4m3fn, except for t_embedder, final_linear, and modF, which are kept in the original precision.
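A minimal sketch of how such a cast can be reproduced in PyTorch. The helper name and the substring-based exclusion check are assumptions; only the dtype and the excluded module names come from this card:

```python
import torch
import torch.nn as nn

# Module names kept out of the fp8 cast, per the description above.
EXCLUDED = ("t_embedder", "final_linear", "modF")

def cast_linear_weights_to_fp8(model: nn.Module) -> nn.Module:
    """Cast the weight of every nn.Linear to torch.float8_e4m3fn,
    skipping any module whose qualified name contains an excluded name."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and not any(x in name for x in EXCLUDED):
            # Storage-only cast: fp8 tensors must be upcast again before matmul.
            module.weight.data = module.weight.data.to(torch.float8_e4m3fn)
    return model
```

Note that torch.float8_e4m3fn here is a storage format: at inference time the weights are typically upcast back to bfloat16/float16 before the linear layers run, so the saving is in checkpoint size and memory rather than raw compute.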
Base model: fal/AuraFlow-v0.3
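A hedged loading sketch, assuming the checkpoint is a standard safetensors state dict; the file name below is a placeholder, not confirmed by this card:

```python
import torch
from safetensors.torch import load_file

# Placeholder file name: check the repo's file list for the actual name.
state_dict = load_file("aura_flow_0.3-fp8.safetensors")

# Upcast fp8 tensors before inference; plain nn.Linear cannot matmul in fp8.
state_dict = {
    k: v.to(torch.bfloat16) if v.dtype == torch.float8_e4m3fn else v
    for k, v in state_dict.items()
}
```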