---
base_model:
  - ByteDance-Seed/BAGEL-7B-MoT
base_model_relation: quantized
pipeline_tag: any-to-any
tags:
  - dfloat11
  - df11
  - lossless compression
  - 70% size, 100% accuracy
---

# DFloat11 Compressed Model: ByteDance-Seed/BAGEL-7B-MoT

This model uses DFloat11 lossless compression. It's 32% smaller than the original BFloat16 model, yet produces bit-identical outputs and runs efficiently on GPUs.

## 📊 Performance Comparison

| Metric | BAGEL-7B-MoT (BFloat16) | BAGEL-7B-MoT (DFloat11) |
| --- | --- | --- |
| Model Size | 29.21 GB | 19.89 GB |
| Peak GPU Memory (1024×1024 image generation) | 30.07 GB | 21.76 GB |
| Generation Time (on an A100 GPU) | 54 seconds | 58 seconds |
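As a quick sanity check, the size figures above are consistent with the "32% smaller" claim and the "70% size" tag:

```python
# Model sizes from the table above, in GB.
bf16_size = 29.21
df11_size = 19.89

ratio = df11_size / bf16_size  # fraction of the original size retained
print(f"{ratio:.1%} of original size; {1 - ratio:.1%} smaller")
```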

πŸ” How It Works

We apply Huffman coding to the exponent bits of BFloat16 model weights, which are highly compressible. We leverage hardware-aware algorithmic designs to enable highly efficient, on-the-fly weight decompression directly on the GPU. Find out more in our research paper.
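To illustrate why the exponent bits compress well: a bfloat16 value has 1 sign bit, 8 exponent bits, and 7 mantissa bits, and trained weights cluster in a narrow magnitude range, so only a few exponent values dominate. The sketch below (toy data and helper names are our own, not the actual DFloat11 kernel) builds a Huffman code over the exponent bytes of some toy weights and measures the bit savings:

```python
import heapq
import struct
from collections import Counter

def bfloat16_bits(x: float) -> int:
    """Truncate a float to bfloat16: the top 16 bits of its float32 encoding."""
    return struct.unpack(">I", struct.pack(">f", x))[0] >> 16

def exponent_byte(bits16: int) -> int:
    """Extract the 8 exponent bits (bfloat16: 1 sign, 8 exponent, 7 mantissa)."""
    return (bits16 >> 7) & 0xFF

def huffman_code_lengths(freqs):
    """Return {symbol: code length in bits} for a Huffman code over freqs."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees deepens every leaf under them by one bit.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Toy "weights": small values clustered near zero, as in trained networks.
weights = [0.01 * (i % 7 - 3) + 0.001 for i in range(1000)]
exps = [exponent_byte(bfloat16_bits(w)) for w in weights]
lengths = huffman_code_lengths(Counter(exps))

compressed_bits = sum(lengths[e] for e in exps)
original_bits = 8 * len(exps)  # 8 exponent bits per weight, uncompressed
print(f"exponent bits: {original_bits} -> {compressed_bits}")
```

Because the toy weights use only a handful of distinct exponents, the Huffman code assigns them short codewords and the exponent stream shrinks well below 8 bits per weight; the sign and mantissa bits, which are close to uniformly distributed, are stored as-is.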

## 🔧 How to Use

A complete usage guide is available in [our GitHub repository](https://github.com/LeanModels/Bagel-DFloat11), forked from the official BAGEL repository.

## 📄 Learn More