DFloat11 Compressed Model: ByteDance-Seed/BAGEL-7B-MoT

This model uses DFloat11 lossless compression. It's 32% smaller than the original BFloat16 model, yet produces bit-identical outputs and runs efficiently on GPUs.

πŸ“Š Performance Comparison

| Metric | BAGEL-7B-MoT (BFloat16) | BAGEL-7B-MoT (DFloat11) |
| --- | --- | --- |
| Model Size | 29.21 GB | 19.89 GB |
| Peak GPU Memory (1024Γ—1024 image generation) | 30.07 GB | 21.76 GB |
| Generation Time (on an A100 GPU) | 54 seconds | 58 seconds |
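As a quick sanity check on the table, the headline 32% figure follows directly from the two model sizes, and also implies an effective bit width of roughly 11 bits per weight (hence "DFloat11"):

```python
bf16_gb = 29.21   # model size in BFloat16, from the table above
df11_gb = 19.89   # model size with DFloat11 compression

reduction = (1 - df11_gb / bf16_gb) * 100
effective_bits = 16 * df11_gb / bf16_gb  # BFloat16 is 16 bits per weight

print(f"size reduction: {reduction:.1f}%")          # ~32%
print(f"effective bits per weight: {effective_bits:.1f}")  # ~11 bits
```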

πŸ” How It Works

We apply Huffman coding to the exponent bits of BFloat16 model weights, which are highly compressible. We leverage hardware-aware algorithmic designs to enable highly efficient, on-the-fly weight decompression directly on the GPU. Find out more in our research paper.
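The intuition can be demonstrated with a small, self-contained sketch (this is only an illustration, not the paper's actual GPU pipeline): trained weights are roughly Gaussian, so their BFloat16 exponents cluster in a narrow band, and a Huffman code over those exponents needs far fewer than the raw 8 bits each.

```python
import heapq
import random
import struct
from collections import Counter

def bf16_exponent(x: float) -> int:
    """Return the 8-bit exponent field of a float's BFloat16 representation.

    BFloat16 is the top 16 bits of IEEE-754 float32, so its exponent
    field is the same bits 23-30 of the float32 encoding.
    """
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return (bits >> 23) & 0xFF

def huffman_code_lengths(freqs):
    """Return {symbol: code_length} for a Huffman code over `freqs`."""
    # Heap entries: (total frequency, unique tiebreaker, symbols in subtree).
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:      # merging two subtrees deepens every leaf by 1
            lengths[s] += 1
        tiebreak += 1
        heapq.heappush(heap, (f1 + f2, tiebreak, s1 + s2))
    return lengths

# Simulated weights: a narrow Gaussian stands in for a trained weight matrix.
random.seed(0)
weights = [random.gauss(0, 0.02) for _ in range(100_000)]
freqs = Counter(bf16_exponent(w) for w in weights)
lengths = huffman_code_lengths(freqs)

total = sum(freqs.values())
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / total
print(f"average Huffman bits per exponent: {avg_bits:.2f} (vs. 8 raw)")
```

Because the code is lossless, decompression recovers the exact original bits, which is why DFloat11 outputs are bit-identical to BFloat16.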

πŸ”§ How to Use

A complete usage guide is available in our GitHub repository (forked from the official Bagel repository): https://github.com/LeanModels/Bagel-DFloat11.
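Setup typically looks like the following sketch; the repository URL is from the link above, but the exact install steps (e.g. the presence of a `requirements.txt`) are assumptions, so defer to the repository's README:

```shell
# Sketch only -- follow the repository's README for the authoritative steps.
git clone https://github.com/LeanModels/Bagel-DFloat11.git
cd Bagel-DFloat11
pip install -r requirements.txt  # assumed dependency file; check the repo
```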

πŸ“„ Learn More


Model tree for DFloat11/BAGEL-7B-MoT-DF11

Base model: Qwen/Qwen2.5-7B (this model is a compressed derivative)