DFloat11 Compressed Model: Wan-AI/Wan2.1-T2V-14B-Diffusers

This model uses DFloat11 lossless compression. It is about 32% smaller than the original BFloat16 model (19.39 GB vs. 28.64 GB), yet produces bit-identical outputs and runs efficiently on GPUs.

📊 Performance Comparison

| Metric | Wan2.1-T2V-14B (BFloat16) | Wan2.1-T2V-14B (DFloat11) |
| --- | --- | --- |
| Model Size | 28.64 GB | 19.39 GB |
| Peak GPU Memory (2s 480p video) | 30.79 GB | 22.22 GB |
| Generation Time (A100 GPU) | 339 seconds | 348 seconds |

πŸ” How It Works

We apply Huffman coding to the exponent bits of BFloat16 model weights, which are highly compressible. We leverage hardware-aware algorithmic designs to enable highly efficient, on-the-fly weight decompression directly on the GPU. Find out more in our research paper.
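To make the idea concrete, here is a small, self-contained Python sketch, not the DFloat11 implementation itself, that builds a Huffman code over the empirical exponent distribution of a BFloat16 tensor and reports the resulting average bits per weight (random Gaussian weights stand in for real model weights):

```python
# Illustrative sketch only: shows why BFloat16 exponent bits compress well.
import heapq

import torch


def huffman_code_lengths(freqs):
    """Return the Huffman code length (in bits) for each symbol."""
    # Each heap entry: (total count, tie-breaker, {symbol: depth so far}).
    heap = [(count, i, {sym: 0}) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, lens1 = heapq.heappop(heap)
        c2, _, lens2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**lens1, **lens2}.items()}
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return heap[0][2]


# Random Gaussian weights stand in for real model weights here.
weights = torch.randn(1_000_000, dtype=torch.bfloat16)

# BFloat16 layout: 1 sign bit | 8 exponent bits | 7 mantissa bits.
bits = weights.view(torch.int16).to(torch.int32) & 0xFFFF
exponents = (bits >> 7) & 0xFF

counts = torch.bincount(exponents, minlength=256)
freqs = {e: int(c) for e, c in enumerate(counts.tolist()) if c > 0}

lengths = huffman_code_lengths(freqs)
total = sum(freqs.values())
avg_exp_bits = sum(freqs[s] * lengths[s] for s in freqs) / total

# Sign and mantissa (1 + 7 bits) are stored as-is; only the exponent is entropy-coded.
print(f"average exponent code length: {avg_exp_bits:.2f} bits (vs. 8 uncompressed)")
print(f"compressed bits per weight:   {1 + 7 + avg_exp_bits:.2f} (vs. 16)")
```

On real trained weights the exponent distribution is similarly skewed, which is what brings the average storage cost from 16 bits down to roughly 11 bits per weight, consistent with the ~32% size reduction in the table above.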

🔧 How to Use

A complete usage guide is available in our GitHub repository: https://github.com/LeanModels/DFloat11/tree/master/examples/wan2.1.
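The guide above is the authoritative reference. As a quick orientation, the sketch below follows the general pattern used in the DFloat11 Diffusers examples: load the standard BFloat16 pipeline, then attach the compressed DFloat11 transformer weights. The exact `DFloat11Model.from_pretrained` arguments (`device`, `bfloat16_model`) and the generation settings here are assumptions based on that repository and may differ between versions:

```python
# Minimal sketch; see the GitHub guide above for the authoritative version.
# Requires: pip install dfloat11[cuda12] diffusers
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video
from dfloat11 import DFloat11Model

model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# Load the standard Diffusers pipeline in BFloat16 (VAE in float32, as in the
# upstream Wan2.1 examples).
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# Replace the transformer's weights with the losslessly compressed DFloat11
# version (argument names assumed from the DFloat11 example scripts).
DFloat11Model.from_pretrained(
    "DFloat11/Wan2.1-T2V-14B-Diffusers-DF11",
    device="cpu",
    bfloat16_model=pipe.transformer,
)

# Optional: offload idle components to CPU to reduce peak GPU memory.
pipe.enable_model_cpu_offload()

prompt = "A cat walks on the grass, realistic"
video = pipe(
    prompt=prompt,
    height=480,
    width=832,
    num_frames=33,       # roughly a 2-second clip at 16 fps
    guidance_scale=5.0,
).frames[0]
export_to_video(video, "output.mp4", fps=16)
```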

📄 Learn More

For more on the DFloat11 compression method and additional examples, see the research paper referenced above and the DFloat11 GitHub repository: https://github.com/LeanModels/DFloat11
