stduhpf committed
Commit 9613d4d · verified · 1 Parent(s): 28c1751

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -13,13 +13,13 @@ The official QAT weights released by google use fp16 (instead of Q6_K) for the e
 
 Here are some perplexity measurements:
 
-| Model | File size ↓ | PPL (wiki.test.raw) ↓ |
-| --- | --- | --- |
-| [IQ3_XS (bartowski)](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-IQ3_XS.gguf) | 5.21 GB | 10.0755 +/- 0.08024 |
-| [This model](https://huggingface.co/stduhpf/google-gemma-3-12b-it-qat-q4_0-gguf-small/blob/main/gemma-3-12b-it-q4_0_s.gguf) | 6.89 GB | 9.2637 +/- 0.07216 |
-| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q4_0.gguf) | 6.91 GB | 9.5589 +/- 0.07527 |
-| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf/blob/main/gemma-3-12b-it-q4_0.gguf) | 8.07 GB | 9.2565 +/- 0.07212 |
-| [Q5_K_S (bartowski)](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q5_K_S.gguf) | 8.23 GB | 9.8540 +/- 0.08016 |
+| Model | File size ↓ | PPL (wiki.test.raw) ↓ | HellaSwag, 4k tasks ↑ |
+| --- | --- | --- | --- |
+| [IQ3_XS (bartowski)](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-IQ3_XS.gguf) | 5.21 GB | 10.0755 +/- 0.08024 | --- |
+| [This model](https://huggingface.co/stduhpf/google-gemma-3-12b-it-qat-q4_0-gguf-small/blob/main/gemma-3-12b-it-q4_0_s.gguf) | 6.89 GB | 9.2637 +/- 0.07216 | 72.925% [71.5366%, 74.2794%] |
+| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q4_0.gguf) | 6.91 GB | 9.5589 +/- 0.07527 | 73.125% [71.7295%, 74.4761%] |
+| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf/blob/main/gemma-3-12b-it-q4_0.gguf) | 8.07 GB | 9.2565 +/- 0.07212 | 72.850% [71.4505%, 74.2056%] |
+| [Q5_K_S (bartowski)](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q5_K_S.gguf) | 8.23 GB | 9.8540 +/- 0.08016 | --- |
 
 Note that this model ends up smaller than the Q4_0 from Bartowski. This is because llama.cpp promotes some tensors to Q4_1 when quantizing models to Q4_0 with an imatrix, whereas this is a static quant.
 I don't understand why Q5_K_S performs worse on that test than the default Q4_0; I wasn't expecting this outcome.
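
For reference, numbers like those in the table above are typically produced with llama.cpp's `llama-perplexity` tool. A minimal sketch, assuming a recent llama.cpp build (older builds name the binary `perplexity`) and the usual evaluation files (`wiki.test.raw` from wikitext-2-raw, `hellaswag_val_full.txt` as prepared by llama.cpp's helper scripts) already on disk:

```sh
# Perplexity on wikitext-2's test split, as reported in the PPL column
./llama-perplexity -m gemma-3-12b-it-q4_0_s.gguf -f wiki.test.raw

# HellaSwag accuracy, capped at the first 4000 tasks ("4k tasks" column)
./llama-perplexity -m gemma-3-12b-it-q4_0_s.gguf \
  -f hellaswag_val_full.txt --hellaswag --hellaswag-tasks 4000
```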
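The size note above comes down to the quantization path rather than the data. A rough sketch of the two `llama-quantize` invocations, where the f16 source, the `imatrix.dat` file, and the output names are placeholders:

```sh
# Static quant (this model): all quantizable tensors stay plain Q4_0
./llama-quantize gemma-3-12b-it-f16.gguf gemma-3-12b-it-q4_0-static.gguf Q4_0

# Imatrix-guided quant: llama.cpp promotes some tensors to Q4_1,
# which is why that file ends up slightly larger
./llama-quantize --imatrix imatrix.dat \
  gemma-3-12b-it-f16.gguf gemma-3-12b-it-q4_0-imatrix.gguf Q4_0
```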
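As for the premise in the hunk header (Google's QAT release keeps the token-embedding table in fp16 rather than Q6_K), one plausible way to produce a smaller file like this one (not necessarily how this model was actually made) is to requantize just that tensor:

```sh
# Speculative sketch: re-encode Google's QAT Q4_0 GGUF, forcing the
# token-embedding tensor down to Q6_K; --allow-requantize permits an
# already-quantized input file.
./llama-quantize --allow-requantize --token-embedding-type q6_K \
  gemma-3-12b-it-q4_0.gguf gemma-3-12b-it-q4_0_s.gguf Q4_0
```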