Produced by Antigma Labs, Antigma Quantize Space

Follow Antigma Labs on X: https://x.com/antigma_labs

Antigma's GitHub homepage: https://github.com/AntigmaLabs

llama.cpp quantization

Quantized using llama.cpp release b4944. Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B. Run these files directly with llama.cpp, or with any other llama.cpp-based project (a minimal example follows).
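As an illustrative sketch only (the model path, -ngl, and -c values are assumptions; adjust them to your hardware), you can chat with the Q4_K_M file using llama-cli, or expose an OpenAI-compatible endpoint with llama-server:

# interactive chat; -cnv applies the chat template embedded in the GGUF
./llama-cli -m ./qwen3-30b-a3b-q4_k_m.gguf -cnv -ngl 99 -c 8192

# or serve an OpenAI-compatible API on port 8080
./llama-server -m ./qwen3-30b-a3b-q4_k_m.gguf --port 8080 -ngl 99 -c 8192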

Prompt format

<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
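For example, a single-turn request with a system prompt would be rendered as below (the strings are illustrative; when you use -cnv or llama-server's chat endpoint, llama.cpp fills this template for you from the GGUF's embedded metadata):

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Summarize what a GGUF file is in one sentence.<|im_end|>
<|im_start|>assistant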

Download a file (not the whole branch) from below:

Filename                   Quant type   File Size   Split
qwen3-30b-a3b-q4_k_m.gguf  Q4_K_M       17.28 GB    No
qwen3-30b-a3b-q4_0.gguf    Q4_0         16.12 GB    No
qwen3-30b-a3b-q4_k_s.gguf  Q4_K_S       16.26 GB    No

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:
pip install -U "huggingface_hub[cli]"

Then, you can target the specific file you want:

huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf" --local-dir ./

If a model is larger than 50 GB, it will have been split into multiple files. To download them all to a local folder, run:

huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf/*" --local-dir ./

You can either specify a new local-dir (e.g. Qwen3-30B-A3B-GGUF) or download everything in place (./).
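A hedged note on split models (this does not apply to the files listed above, and the shard name below is purely illustrative): llama.cpp loads the remaining shards automatically, so you only need to point it at the first one:

./llama-cli -m ./Model-Q8_0/Model-Q8_0-00001-of-00005.gguf -cnv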

GGUF details

Model size: 30.5B params
Architecture: qwen3moe
Quantization: 4-bit

Model tree for Antigma/Qwen3-30B-A3B-GGUF

Base model: Qwen/Qwen3-30B-A3B (this repository is one of its quantized versions).