FP8 quantized version of AuraFlow v0.3

All linear weights of the flow transformer were cast to torch.float8_e4m3fn, except for t_embedder, final_linear, and modF, which were kept in their original precision.
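
Assuming the checkpoint follows the reference AuraFlow module naming, a minimal sketch of this casting in PyTorch might look like the following. The function name and the prefix matching are illustrative, not the exact script used to produce this checkpoint:

```python
import torch
import torch.nn as nn

# Modules left in their original precision (names per the reference
# AuraFlow flow transformer).
EXCLUDED_PREFIXES = ("t_embedder", "final_linear", "modF")


def cast_linear_weights_to_fp8(transformer: nn.Module) -> nn.Module:
    """Cast every nn.Linear weight to float8_e4m3fn except excluded modules."""
    for name, module in transformer.named_modules():
        if isinstance(module, nn.Linear) and not name.startswith(EXCLUDED_PREFIXES):
            # Storage-only cast: the weight is stored as float8_e4m3fn and
            # must be upcast (e.g. to bfloat16) before the matmul at
            # inference time, since nn.Linear does not compute in fp8.
            module.weight.data = module.weight.data.to(torch.float8_e4m3fn)
    return transformer
```

Note that float8_e4m3fn here serves as a storage format to roughly halve the transformer's memory footprint; inference code is expected to upcast the weights to a compute dtype such as bfloat16 on the fly.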

Base model: fal/AuraFlow-v0.3