Tags: GGUF · Not-For-All-Audiences · Merge · iMat · conversational
InferenceIllusionist committed · verified
Commit e6458fa · 1 Parent(s): e45f3ef

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ PROUDLY PRESENTS
 # Phainesthesia-8x7B-iMat-GGUF
 
 Quantized from fp16 with love.
-* Quantizations made possible using .imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again)
+* Quantizations made possible using mixtral-8x7b-instruct-v0.1.imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again)
 
 For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
 
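
For context on the change above: an importance-matrix (.imatrix) file is supplied to llama.cpp's quantize tool at conversion time so the quantizer can weight tensors by measured activation importance. The sketch below shows one way that invocation typically looks; the `llama-quantize` binary name, the `--imatrix` flag, the file paths, and the `IQ4_XS` target type are illustrative assumptions, not details taken from this commit.

```python
# Minimal sketch: invoke llama.cpp's quantize tool with an importance matrix.
# Assumes `llama-quantize` is built and on PATH and supports `--imatrix`;
# all file names below are hypothetical placeholders.
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--imatrix", "mixtral-8x7b-instruct-v0.1.imatrix",  # importance matrix guiding the quant
        "phainesthesia-8x7b-f16.gguf",                       # fp16 source model (placeholder path)
        "phainesthesia-8x7b-IQ4_XS.gguf",                    # quantized output (placeholder path)
        "IQ4_XS",                                            # example target quant type
    ],
    check=True,  # raise if the quantization command fails
)
```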