InferenceIllusionist committed 1f64902 (verified · parent 2c1a1e1)

Update README.md

Files changed (1): README.md (+6 -1)
# Phainesthesia-8x7B-iMat-GGUF

<b>Update 4/28: This model was retested after llama.cpp [PR 6884](https://github.com/ggerganov/llama.cpp/issues/6841) was merged, and it has been confirmed to pass the recently added --check-tensors test.</b>

Quantized from fp16 with love.
* Quantizations made possible using the mixtral-8x7b-instruct-v0.1.imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again)

For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747).

<i>All quants are verified working prior to upload for your safety and convenience.</i>

Please note that importance matrix quantizations are a work in progress; IQ3 and above is recommended for best results.
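If you want to reproduce the tensor check locally, a minimal sketch is below. The repo id and quant filename are assumptions for illustration; substitute the actual file you download from this repo. `--check-tensors` is the llama.cpp flag referenced above, which validates tensor data (e.g. NaN/invalid values) at load time.

```shell
# Fetch one quant file from the repo (repo id and filename are illustrative)
huggingface-cli download InferenceIllusionist/Phainesthesia-8x7B-iMat-GGUF \
  Phainesthesia-8x7B-iMat-IQ4_XS.gguf --local-dir .

# Load the model with tensor validation enabled; llama.cpp aborts if a tensor fails the check
./main -m Phainesthesia-8x7B-iMat-IQ4_XS.gguf --check-tensors -p "Hello" -n 16
```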