An FP8-Dynamic quantization of Qwen/Qwen3-30B-A3B, so the model can run on Ampere cards (e.g. the RTX 3090).
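"FP8-Dynamic" here means weights carry precomputed FP8 (E4M3) scales, while activation scales are derived from the data at runtime rather than calibrated offline. A minimal pure-Python sketch of that scale-and-round step, assuming standard E4M3 parameters (3 mantissa bits, max value 448.0); the helper names are illustrative, not from any library:

```python
import math

# FP8 E4M3 has 3 mantissa bits and a max representable value of 448.0.
E4M3_MAX = 448.0

def round_to_e4m3(x):
    """Round x to the nearest FP8 E4M3 value (NaN/inf handling omitted)."""
    if x == 0.0:
        return 0.0
    e = max(math.floor(math.log2(abs(x))), -6)  # -6 = E4M3 min normal exponent
    quantum = 2.0 ** e / 8.0                    # spacing of representable values
    return round(x / quantum) * quantum

def fp8_dynamic_quant(values):
    """'Dynamic' = the scale is computed from the data at call time,
    mapping max |x| onto E4M3_MAX; returns the dequantized round-trip."""
    amax = max(abs(v) for v in values)
    scale = amax / E4M3_MAX if amax > 0 else 1.0
    return [round_to_e4m3(max(-E4M3_MAX, min(E4M3_MAX, v / scale))) * scale
            for v in values]

acts = [0.1, -3.2, 7.5, 0.0]
deq = fp8_dynamic_quant(acts)  # close to acts, with small rounding error
```

The round-trip loses only mantissa precision; the largest element is recovered essentially exactly because it maps onto the top of the E4M3 range.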

Use the following vLLM command to run on 2x RTX 3090:

```shell
vllm serve khajaphysist/Qwen3-30B-A3B-FP8-Dynamic --enable-reasoning --reasoning-parser deepseek_r1 \
     -tp 2 --gpu-memory-utilization 0.99 --disable-log-requests --enforce-eager --max-num-seqs 15
```
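Once the server is up, vLLM exposes an OpenAI-compatible API (on port 8000 by default, assumed here). A stdlib-only sketch of a chat completion request; the prompt and `max_tokens` value are arbitrary:

```python
import json
import urllib.request

# Assumes the vllm serve command above is running with default host/port.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "khajaphysist/Qwen3-30B-A3B-FP8-Dynamic",
    "messages": [{"role": "user", "content": "Explain FP8 quantization briefly."}],
    "max_tokens": 256,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment to send (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

With `--reasoning-parser deepseek_r1` enabled, the response separates the reasoning trace from the final answer in the returned message.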
Model size: 30.6B params (Safetensors)
Tensor types: BF16 · F8_E4M3

Base model: Qwen/Qwen3-30B-A3B (this model is one of its quantized variants)