---
base_model:
  - Wan-AI/Wan2.1-T2V-14B-Diffusers
base_model_relation: quantized
pipeline_tag: text-to-video
tags:
- dfloat11
- df11
- lossless compression
- 70% size, 100% accuracy
---

# DFloat11 Compressed Model: `Wan-AI/Wan2.1-T2V-14B-Diffusers`

This model uses **DFloat11** lossless compression. It's 30% smaller than the original BFloat16 model, yet produces bit-identical outputs and runs efficiently on GPUs.

### πŸ“Š Performance Comparison

| Metric                             | Wan2.1-T2V-14B (BFloat16) | Wan2.1-T2V-14B (DFloat11) |
| ---------------------------------- | ------------------------- | ------------------------- |
| Model Size                         | 28.64 GB                  | 19.39 GB                  |
| Peak GPU Memory<br>(2s 480p Video) | 30.79 GB                  | 22.22 GB                  |
| Generation Time<br>(on an A100 GPU) | 339 seconds               | 348 seconds               |

### πŸ” How It Works

We apply Huffman coding to the exponent bits of BFloat16 model weights, which are highly compressible, and use hardware-aware algorithmic designs to enable efficient, on-the-fly weight decompression directly on the GPU. Find out more in our [research paper](https://arxiv.org/abs/2504.11651).
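
To make the compressibility claim concrete, here is a toy sketch (not the DFloat11 implementation) that extracts the exponent byte of BFloat16 values and measures how many bits a Huffman code actually needs for it. Random Gaussian values stand in for trained weights, so the exact figure will differ from a real model.

```python
import heapq
import itertools
from collections import Counter

import torch

# Stand-in for trained weights: random Gaussian values cast to BFloat16.
weights = torch.randn(1_000_000).to(torch.bfloat16)

# Reinterpret each BFloat16 value as 16 raw bits:
# layout is [1 sign bit | 8 exponent bits | 7 mantissa bits].
raw = weights.view(torch.int16).to(torch.int32) & 0xFFFF
exponents = ((raw >> 7) & 0xFF).tolist()

# Compute Huffman code lengths over the exponent-byte frequencies.
freq = Counter(exponents)
tiebreak = itertools.count()
heap = [(count, next(tiebreak), [sym]) for sym, count in freq.items()]
heapq.heapify(heap)
code_len = {sym: 0 for sym in freq}
while len(heap) > 1:
    c1, _, group1 = heapq.heappop(heap)
    c2, _, group2 = heapq.heappop(heap)
    for s in group1 + group2:  # each merge adds one bit to these symbols' codes
        code_len[s] += 1
    heapq.heappush(heap, (c1 + c2, next(tiebreak), group1 + group2))

avg_bits = sum(freq[s] * code_len[s] for s in freq) / len(exponents)
print(f"~{avg_bits:.2f} Huffman bits per exponent, vs. 8 bits uncompressed")
```

Sign and mantissa bits are left untouched; shrinking the 8 exponent bits down to a few bits per weight is what brings the checkpoint to roughly 70% of its BFloat16 size.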

### πŸ”§ How to Use

A complete usage guide is available in our GitHub repository: [https://github.com/LeanModels/DFloat11/tree/master/examples/wan2.1](https://github.com/LeanModels/DFloat11/tree/master/examples/wan2.1).
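
For orientation, the sketch below follows the loading pattern used in other DFloat11 Diffusers examples: load the BFloat16 pipeline, then swap the transformer's weights for the compressed ones. The DF11 repository id and the `DFloat11Model.from_pretrained` keyword arguments are assumptions here; treat the GitHub example above as the authoritative version.

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video
from dfloat11 import DFloat11Model  # pip install dfloat11[cuda12]

# Load the original Diffusers pipeline in BFloat16 (VAE stays in FP32).
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# Replace the transformer's BFloat16 weights with the DFloat11-compressed ones.
# Repo id and keyword arguments below are assumed from other DFloat11 examples.
DFloat11Model.from_pretrained(
    "DFloat11/Wan2.1-T2V-14B-Diffusers-DF11",
    device="cpu",
    bfloat16_model=pipe.transformer,
)
pipe.enable_model_cpu_offload()

# Generate a short 480p clip (roughly 2 seconds at 16 fps).
video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=480,
    width=832,
    num_frames=33,
    guidance_scale=5.0,
).frames[0]
export_to_video(video, "output.mp4", fps=16)
```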

### πŸ“„ Learn More

* **Paper**: [70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float](https://arxiv.org/abs/2504.11651)
* **GitHub**: [https://github.com/LeanModels/DFloat11](https://github.com/LeanModels/DFloat11)
* **HuggingFace**: [https://huggingface.co/DFloat11](https://huggingface.co/DFloat11)