W4A16 quantization (4-bit weights, 16-bit activations, group size 128) produced with llmcompressor.
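
For reference, here is a minimal sketch of the kind of one-shot GPTQ recipe llmcompressor uses to produce a W4A16 checkpoint. The base checkpoint, calibration dataset, and sample counts below are illustrative assumptions, not the exact settings used for this model:

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# W4A16 preset: 4-bit grouped weight quantization (group size 128,
# the "G128" in this model's name) with 16-bit activations.
# The lm_head is typically left unquantized.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="google/gemma-3-27b-it",   # assumed base checkpoint
    dataset="open_platypus",         # illustrative calibration dataset
    recipe=recipe,
    output_dir="gemma-3-27b-it-qat-W4A16-G128",
    max_seq_length=2048,             # illustrative
    num_calibration_samples=512,     # illustrative
)
```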

Serve with vLLM:

```bash
vllm serve leon-se/gemma-3-27b-it-qat-W4A16-G128 --max-model-len 4096 --max-num-seqs 1
```
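
Once running, the server exposes an OpenAI-compatible API (port 8000 by default), so any OpenAI client can query it. A quick smoke test, assuming the `openai` Python package and the default host/port:

```python
from openai import OpenAI

# vLLM accepts any API key unless one was set at serve time.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="leon-se/gemma-3-27b-it-qat-W4A16-G128",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```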