DFloat11 Compressed Model: Qwen/Qwen3-4B

This is a losslessly compressed version of Qwen/Qwen3-4B using our custom DFloat11 format. The outputs of this compressed model are bit-for-bit identical to the original BFloat16 model, while reducing GPU memory consumption by approximately 30%.

πŸ” How It Works

DFloat11 compresses model weights using Huffman coding of BFloat16 exponent bits, combined with hardware-aware algorithmic designs that enable efficient on-the-fly decompression directly on the GPU. During inference, the weights remain compressed in GPU memory and are decompressed just before matrix multiplications, then immediately discarded after use to minimize memory footprint.
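
To build intuition for why this works, here is a minimal, self-contained sketch (not the DFloat11 kernel itself) that measures the entropy of the BFloat16 exponent bits of a random weight tensor. Because the exponents of trained weights are highly concentrated, their entropy is far below the 8 bits they occupy, which is where the roughly 30% saving comes from:

    import math
    from collections import Counter

    import torch

    # Stand-in for a weight matrix; real LLM weights show a similar (or stronger)
    # concentration of exponent values.
    weights = torch.randn(1_000_000, dtype=torch.bfloat16)

    # BFloat16 layout: 1 sign bit | 8 exponent bits | 7 mantissa bits.
    bits = weights.view(torch.int16).to(torch.int32) & 0xFFFF
    exponents = ((bits >> 7) & 0xFF).tolist()

    counts = Counter(exponents)
    total = len(exponents)
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values())

    # Sign and mantissa stay uncompressed (8 bits); Huffman coding shrinks the
    # exponent toward its entropy.
    bits_per_weight = 8 + entropy
    print(f"exponent entropy: {entropy:.2f} of 8 bits")
    print(f"estimated size: {bits_per_weight:.2f} bits/weight, "
          f"{bits_per_weight / 16:.0%} of BF16")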

Key benefits:

  • No CPU decompression or host-device data transfer; all operations are handled entirely on the GPU.
  • Decompression overhead is constant per forward pass and independent of batch size, so DFloat11 becomes increasingly efficient at larger batch sizes (see the sketch after this list).
  • DFloat11 is much faster than CPU-offloading approaches, enabling practical deployment in memory-constrained environments.
  • At batch size 1, inference is approximately 2× slower than the original BF16 model, but the gap narrows significantly at larger batch sizes.
  • The compression is fully lossless, guaranteeing that the model’s outputs are bit-for-bit identical to those of the original model.
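
As a back-of-the-envelope illustration of the batch-size point above, the timings below are made-up placeholders, not measurements; the only assumption is that decompression is paid once per forward pass while compute scales with the batch:

    # Hypothetical timings for illustration only; not benchmark results.
    decompress_ms = 20.0       # fixed decompression cost per forward pass
    compute_ms_per_seq = 10.0  # compute cost per sequence in the batch

    for batch_size in (1, 4, 16, 64):
        step_ms = decompress_ms + compute_ms_per_seq * batch_size
        share = decompress_ms / step_ms
        print(f"batch {batch_size:>2}: decompression is {share:.0%} of the step")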

πŸ”§ How to Use

  1. Install the DFloat11 pip package (installs the CUDA kernel automatically; requires a CUDA-compatible GPU and PyTorch installed):

    pip install dfloat11[cuda12]
    # or if you have CUDA version 11:
    # pip install dfloat11[cuda11]
    
  2. To use the DFloat11 model, run the following example code in Python:

    import torch
    from dfloat11 import DFloat11Model
    from transformers import AutoTokenizer

    model_id = "DFloat11/Qwen3-4B-DF11"

    # Load the DFloat11-compressed weights; they stay compressed in GPU memory
    # and are decompressed on the fly during inference.
    model = DFloat11Model.from_pretrained(model_id, device_map="auto")

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

    prompt = "Question: What is a binary tree and its applications? Answer:"
    inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device)

    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=256,
            do_sample=True,
        )

    print(tokenizer.batch_decode(output, skip_special_tokens=True))
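
To verify the memory savings on your own hardware, you can optionally check the peak GPU memory after generation. This snippet is a suggestion for sanity-checking, not part of the official example:

    # Optional: report peak GPU memory used during generation
    # (assumes the model fits on a single CUDA device).
    peak_gib = torch.cuda.max_memory_allocated() / 1024**3
    print(f"Peak GPU memory: {peak_gib:.2f} GiB")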
    

πŸ“„ Learn More
