Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ base_model:
This is a "self" merge of https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF.

-The official QAT weights released by Google use fp16 (instead of Q6_K) for the embedding table, which makes the model take significantly more memory (and storage) than a Q4_0 quant should. Instead of quantizing the table myself, I extracted it from Bartowski's quantized models.
+The official QAT weights released by Google use fp16 (instead of Q6_K) for the embedding table, which makes the model take significantly more memory (and storage) than a Q4_0 quant should. Instead of quantizing the table myself, I extracted it from Bartowski's quantized models, because I thought using imatrix quants would give better quality (it doesn't; imatrix isn't used for token embeddings).
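
For anyone who wants to verify the embedding-table dtypes themselves, here is a minimal sketch (not the script used for this merge) using the `gguf` Python package that ships with llama.cpp; the file names are placeholders for the two source quants:

```python
# Minimal sketch: report which quantization type each GGUF file uses
# for the token-embedding table. Assumes `pip install gguf`
# (the gguf-py package from llama.cpp). File names are placeholders.
from gguf import GGUFReader

def embedding_tensor_info(path: str):
    reader = GGUFReader(path)
    for tensor in reader.tensors:
        # llama.cpp stores the embedding table under this tensor name
        if tensor.name == "token_embd.weight":
            return tensor.tensor_type, tensor.n_bytes
    raise ValueError(f"no token_embd.weight tensor in {path}")

for path in ("gemma-3-12b-it-q4_0.gguf",           # official QAT release
             "google_gemma-3-12b-it-Q4_0.gguf"):   # Bartowski's quant
    ttype, n_bytes = embedding_tensor_info(path)
    print(f"{path}: {ttype.name}, {n_bytes / 1e6:.1f} MB")
```

The official QAT file should report F16 for this tensor and Bartowski's should report Q6_K; the merge itself then amounts to rewriting the QAT file with that one tensor swapped in (e.g. with `gguf.GGUFWriter`).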
Here are some perplexity measurements: