An FP8-Dynamic quantization of Qwen/Qwen3-32B, intended to run on Ampere cards.

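A quantization like this is typically produced with llm-compressor's `FP8_DYNAMIC` scheme (per-channel FP8 weights, dynamic per-token activation scales). The recipe below is a hedged sketch of that standard setup, not the exact recipe used for this checkpoint, which is not published here:

```yaml
# Sketch of an llm-compressor recipe for FP8-Dynamic quantization.
# Assumption: lm_head is kept unquantized, as is common practice.
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]
      ignore: ["lm_head"]
      scheme: FP8_DYNAMIC
```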
Use the following vLLM command to run on 2x RTX 3090:

```shell
vllm serve khajaphysist/Qwen3-32B-FP8-Dynamic --enable-reasoning --reasoning-parser deepseek_r1 \
     -tp 2 --gpu-memory-utilization 0.99 --disable-log-requests --enforce-eager --max-num-seqs 15
```
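Once the server is up, it exposes the standard OpenAI-compatible API on port 8000 (vLLM's default). A minimal sketch of building a chat request with only the standard library; the endpoint URL and prompt here are illustrative, and since the server is launched with `--reasoning-parser deepseek_r1`, the response message separates the model's reasoning from its final answer:

```python
import json
import urllib.request


def build_chat_request(prompt: str,
                       model: str = "khajaphysist/Qwen3-32B-FP8-Dynamic") -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }


payload = build_chat_request("What is the capital of France?")

# To send it against a running server (assumed local host/port):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# resp = json.load(urllib.request.urlopen(req))
# message = resp["choices"][0]["message"]
# With the deepseek_r1 reasoning parser, "reasoning_content" holds the
# chain of thought and "content" holds the final answer.
```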
Model size: 32.8B params (Safetensors) · Tensor types: BF16, F8_E4M3

Model tree for khajaphysist/Qwen3-32B-FP8-Dynamic

Base model: Qwen/Qwen3-32B (this model is one of its 33 quantized variants)