base model reference added.
README.md CHANGED
@@ -1,3 +1,7 @@
+---
+base_model:
+- mistralai/Ministral-8B-Instruct-2410
+---
 This model is the 3-bit quantized version of [Ministral-8B](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) by Mistral AI. Please follow the instructions below to run the model on your device:
 
 There are multiple ways to run inference with the model. First, let's install `llama.cpp` and use it for inference.
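
For context on what the README's last lines describe, here is a minimal sketch of the usual `llama.cpp` CLI workflow: clone and build the project, then point `llama-cli` at the quantized GGUF file. The filename `Ministral-8B-Instruct-2410-Q3_K_M.gguf` is an assumed placeholder rather than this repository's actual artifact, and the binary name and path can vary between llama.cpp versions.

```bash
# Minimal sketch of the llama.cpp CLI workflow the README points to.
# The GGUF filename below is a placeholder; use the actual 3-bit file from this repo.

# Build llama.cpp from source (recent releases use CMake).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run inference with the quantized model.
./build/bin/llama-cli \
  -m ./models/Ministral-8B-Instruct-2410-Q3_K_M.gguf \
  -p "Explain what 3-bit quantization trades off." \
  -n 128
```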