---
license: gemma
metrics:
- perplexity
base_model:
- bartowski/google_gemma-3-12b-it-GGUF
- google/gemma-3-12b-it-qat-q4_0-gguf
---
This is a "self" merge of https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF.
The official QAT weights released by Google keep the embeddings table in fp16 (instead of Q6_K), which makes the model take significantly more memory (and storage) than a Q4_0 quant normally would. Instead of quantizing the table myself, I extracted it from Bartowski's quantized model, since it was already calibrated with an imatrix, which should squeeze a bit of extra quality out of it.
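For reference, here is a minimal sketch of how the embeddings tensor can be inspected in the two source GGUFs with the `gguf` Python package from llama.cpp's `gguf-py`. This is not the exact merge procedure, and the file names are placeholders:

```python
# Minimal sketch: compare how token embeddings are stored in each source GGUF.
# Assumes `pip install gguf` (llama.cpp's gguf-py); file names are placeholders.
from gguf import GGUFReader

paths = (
    "gemma-3-12b-it-qat-q4_0.gguf",     # Google's QAT release (fp16 embeddings)
    "google_gemma-3-12b-it-Q4_0.gguf",  # Bartowski's imatrix quant (Q6_K embeddings)
)

for path in paths:
    reader = GGUFReader(path)
    for t in reader.tensors:
        if t.name == "token_embd.weight":
            # tensor_type is a GGMLQuantizationType enum (F16, Q6_K, ...)
            print(f"{path}: {t.tensor_type.name}, "
                  f"{t.n_elements:,} elements, {t.n_bytes / 1e9:.2f} GB")
```

The merge itself then amounts to writing out Google's model with that one tensor swapped for Bartowski's Q6_K version.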
Here are some perplexity measurements:
| Model | File size ↓ | PPL (wiki.test.raw) ↓ |
|---|---|---|
| IQ3_XS (Bartowski) | 5.21 GB | 10.0755 +/- 0.08024 |
| This model | 6.89 GB | 9.2637 +/- 0.07216 |
| Q4_0 (Bartowski) | 6.91 GB | 9.5589 +/- 0.07527 |
| QAT Q4_0 (Google) | 8.07 GB | 9.2565 +/- 0.07212 |
| Q5_K_S (Bartowski) | 8.23 GB | 9.8540 +/- 0.08016 |
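For context, the PPL column is the usual llama.cpp perplexity run over wikitext, i.e. the exponentiated average negative log-likelihood per token (lower is better). A toy sketch of the metric itself, with made-up log-probabilities:

```python
import math

def perplexity(token_logprobs):
    # PPL = exp(-(1/N) * sum_i log p(token_i | context)); lower is better
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# made-up natural-log token probabilities for a handful of tokens
print(perplexity([-2.1, -1.7, -2.4, -1.9]))  # ~7.6
```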
Note that this model ends up smaller than the Q4_0 from Bartowski. This is because llama.cpp quantizes a few tensors to Q4_1 when producing a Q4_0 model, whereas Google used plain Q4_0 throughout, which is slightly smaller. I don't understand why Q5_K_S performs worse than the default Q4_0 on this test; I wasn't expecting that outcome. This merge seems to be a good balance between model size and perplexity, and I believe this is representative of the overall quality of the model.
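To make the size gap concrete, here is the block-level arithmetic behind that remark. In llama.cpp, Q4_0 stores each block of 32 weights as an fp16 scale plus 32 4-bit values, while Q4_1 adds an fp16 minimum per block:

```python
# Bytes per block of 32 weights in llama.cpp's legacy quantization formats
BLOCK = 32
Q4_0_BYTES = 2 + BLOCK // 2      # fp16 scale + 4-bit quants      = 18 bytes
Q4_1_BYTES = 2 + 2 + BLOCK // 2  # fp16 scale + fp16 min + quants = 20 bytes

for name, nbytes in (("Q4_0", Q4_0_BYTES), ("Q4_1", Q4_1_BYTES)):
    print(f"{name}: {nbytes * 8 / BLOCK:.1f} bits per weight")
# Q4_0: 4.5 bits per weight
# Q4_1: 5.0 bits per weight
```

Only a handful of tensors are affected, which is consistent with the ~0.02 GB difference between the two Q4_0 files in the table.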