# DFloat11 Compressed Model: mistralai/Mistral-Nemo-Instruct-2407
This is a losslessly compressed version of mistralai/Mistral-Nemo-Instruct-2407
using our custom DFloat11 format. The outputs of this compressed model are bit-for-bit identical to the original BFloat16 model, while reducing GPU memory consumption by approximately 30%.
## How It Works
DFloat11 compresses model weights using Huffman coding of BFloat16 exponent bits, combined with hardware-aware algorithmic designs that enable efficient on-the-fly decompression directly on the GPU. During inference, the weights remain compressed in GPU memory and are decompressed just before matrix multiplications, then immediately discarded after use to minimize memory footprint.
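To make the idea concrete, the sketch below (an illustration, not the DFloat11 kernel itself) extracts the 8-bit exponent field from a BFloat16 tensor and estimates its empirical entropy, which is the size a Huffman code of the exponents can approach. The random tensor is a hypothetical stand-in for a real weight matrix, so the printed numbers are only indicative.

```python
import torch

# Hypothetical stand-in for one weight matrix; real checkpoints have many.
weights = torch.randn(4096, 4096, dtype=torch.float32).to(torch.bfloat16)

# BFloat16 layout: 1 sign bit | 8 exponent bits | 7 mantissa bits.
raw = weights.view(torch.int16).int() & 0xFFFF   # reinterpret the 16-bit pattern
exponents = (raw >> 7) & 0xFF                    # isolate the exponent field

# Empirical entropy of the exponent distribution: the lower bound that a
# Huffman (or any entropy) coder can approach, in bits per weight.
counts = torch.bincount(exponents.flatten(), minlength=256).float()
probs = counts[counts > 0] / counts.sum()
entropy = -(probs * probs.log2()).sum().item()

# Sign and mantissa (8 bits) stay uncompressed; only the exponent is entropy-coded.
print(f"exponent entropy: {entropy:.2f} bits (vs. 8 bits uncompressed)")
print(f"approx. bits per weight: {8 + entropy:.1f} (vs. 16 for BF16)")
```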
Key benefits:
- No CPU decompression or host-device data transfer -- all operations are handled entirely on the GPU.
- Decompression overhead is constant per forward pass and independent of batch size, making DFloat11 increasingly efficient at larger batch sizes.
- DFloat11 is much faster than CPU-offloading approaches, enabling practical deployment in memory-constrained environments.
- At batch size = 1, inference is approximately 2× slower than the original BF16 model, but the performance gap narrows significantly with larger batches.
- The compression is fully lossless, guaranteeing that the model's outputs are bit-for-bit identical to those of the original model.
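For a rough sense of what the ~30% saving above means in practice, here is a back-of-the-envelope estimate. The ~12B parameter count for Mistral-Nemo and the 30% figure quoted in this card are assumptions of the sketch, not measurements.

```python
# Rough estimate only; the parameter count and saving ratio are assumptions.
params = 12e9
bf16_gib = params * 2 / 1024**3     # 2 bytes per weight in BFloat16
df11_gib = bf16_gib * 0.70          # roughly 30% smaller in DFloat11 format
print(f"BF16 weights:     ~{bf16_gib:.1f} GiB")
print(f"DFloat11 weights: ~{df11_gib:.1f} GiB")
```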
## How to Use
Install the DFloat11 pip package (the CUDA kernel is installed automatically; a CUDA-compatible GPU and an existing PyTorch installation are required):
```bash
pip install -U dfloat11[cuda12]

# or if you have CUDA version 11:
# pip install -U dfloat11[cuda11]
```
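Optionally, a quick sanity check that PyTorch can see the GPU (this assumes `python` and the NVIDIA driver are on your PATH):

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
```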
To use the DFloat11 model, run the following example code in Python:
```python
import torch
from dfloat11 import DFloat11Model
from transformers import AutoTokenizer

model_id = "DFloat11/Mistral-Nemo-Instruct-2407-DF11"

# Load the DFloat11-compressed weights directly onto the GPU.
model = DFloat11Model.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

prompt = "Question: What is a binary tree and its applications? Answer:"
inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
    )

print(tokenizer.batch_decode(output, skip_special_tokens=True))
```
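Because the decompression overhead per forward pass is fixed, batching prompts amortizes it. A follow-up sketch reusing the `model` and `tokenizer` objects from the example above; the prompts and token budget are illustrative.

```python
# Continuation of the example above. Left padding is generally preferred for
# decoder-only generation when batching prompts of different lengths.
tokenizer.padding_side = "left"

prompts = [
    "Question: What is a binary tree and its applications? Answer:",
    "Question: What is Huffman coding and where is it used? Answer:",
]
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    outputs = model.generate(**batch, max_new_tokens=128, do_sample=True)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)

# Peak GPU memory allocated during this run (PyTorch allocations only).
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GiB")
```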
## Learn More