stduhpf committed
Commit 32e0e37 · verified · 1 Parent(s): 9613d4d

Update README.md

Files changed (1):
1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ base_model:
 
 This is a "self" merge of https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF.
 
-The official QAT weights released by google use fp16 (instead of Q6_K) for the embeddings table, which makes this model take a significant extra amount of memory (and storage) compared to what Q4_0 quants are supposed to take. Instead of quantizing the table myself, I extracted it from Bartowski's quantized models, because those were already calibrated with imatrix, which should squeeze some extra performance out of it.
+The official QAT weights released by google use fp16 (instead of Q6_K) for the embeddings table, which makes this model take a significant extra amount of memory (and storage) compared to what Q4_0 quants are supposed to take. Instead of quantizing the table myself, I extracted it from Bartowski's quantized models because I thought using imatrix quants would give better quality (it doesn't, imatrix isn't used for token embeddings).
 
 Here are some perplexity measurements:
 
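For context, the claim in the changed line can be checked directly: the embeddings table lives in a GGUF file as the `token_embd.weight` tensor, and its quantization type is readable from the file itself. Below is a minimal inspection sketch, assuming the `gguf` Python package that ships with the llama.cpp repo (`pip install gguf`); the local file names are hypothetical placeholders for downloads of the two source models.

```python
# Minimal sketch: report the quantization type and size of the token
# embeddings table in a GGUF file, using the `gguf` package from llama.cpp.
# File names below are hypothetical placeholders, not canonical paths.
from gguf import GGUFReader

def describe_embeddings(path: str) -> None:
    """Print the quant type and size of the token embeddings tensor."""
    reader = GGUFReader(path)
    for tensor in reader.tensors:
        # "token_embd.weight" is the GGUF name of the embeddings table.
        if tensor.name == "token_embd.weight":
            mib = tensor.n_bytes / (1024 * 1024)
            print(f"{path}: {tensor.tensor_type.name}, {mib:.1f} MiB")
            return
    print(f"{path}: token_embd.weight not found")

describe_embeddings("gemma-3-12b-it-qat-q4_0.gguf")     # official QAT: expect F16
describe_embeddings("google_gemma-3-12b-it-Q4_0.gguf")  # Bartowski: expect Q6_K
```

This only verifies the type difference that motivated the merge; actually swapping the tensor additionally requires rewriting the file (for example with the package's GGUFWriter), copying every other tensor and metadata field unchanged.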