Update README.md
Quantizations provided by [TheBloke](https://huggingface.co/TheBloke/laser-dolphi
*Current [Quantizations](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-GGUF)*
- Q4_K_M
- Q5_K_M

GGUF chat available [here](https://huggingface.co/spaces/macadeliccc/laser-dolphin-mixtral-chat-GGUF)

4-bit bnb chat available [here](https://huggingface.co/spaces/macadeliccc/laser-dolphin-mixtral-chat)

## Code Example

Switch the commented model definition to run in 4-bit. It should fit in roughly 9 GB of VRAM and still exceed the single 7B model by roughly 5-6 points.