# Phainesthesia-8x7B-iMat-GGUF

Quantized from fp16 with love.

* Quantizations made possible using the mixtral-8x7b-instruct-v0.1.imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again)

For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
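As a rough illustration of how an importance-matrix quantization like this is typically produced with llama.cpp, here is a minimal sketch. The filenames, output name, and quant type below are assumptions for illustration, not the exact commands used for this repo:

```shell
# Hypothetical filenames; requires a local llama.cpp build providing llama-quantize.
MODEL=Phainesthesia-8x7B-fp16.gguf          # assumed fp16 source GGUF
IMATRIX=mixtral-8x7b-instruct-v0.1.imatrix  # the imatrix file referenced above
QUANT=Q4_K_M                                # any quant type supported by llama-quantize
OUT=Phainesthesia-8x7B-$QUANT.gguf

# The actual quantization step (commented out here since it needs the binary and model):
# ./llama-quantize --imatrix "$IMATRIX" "$MODEL" "$OUT" "$QUANT"
echo "would run: llama-quantize --imatrix $IMATRIX $MODEL $OUT $QUANT"
```

The `--imatrix` flag supplies per-tensor importance weights gathered from a calibration run, which is what lets the lower-bit quant types retain more quality than a plain round-to-nearest quantization.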