Update README.md
---
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---

# Eurydice 24b v1 🧙♂️

Eurydice 24b v1 is designed to be the perfect companion for multi-role conversations. It demonstrates exceptional contextual understanding and excels at creativity and storytelling. Built on Mistral 3.1, this model has been trained on a custom dataset specifically crafted to enhance its conversational and storytelling capabilities.

## Model Details 📊

- **Developed by:** Aixon Lab
- **Model type:** Causal Language Model
- **Language(s):** English (primarily); may support other languages
- **License:** Apache 2.0
- **Repository:** https://huggingface.co/aixonlab/Eurydice-24b-v1

## Quantization

- **GGUF:** https://huggingface.co/mradermacher/Eurydice-24b-v1-GGUF
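The GGUF files can be run locally with llama.cpp. A minimal sketch, assuming a `Q4_K_M` quant (the exact filenames are an assumption; list the GGUF repo above to confirm what is published):

```
# Download one quant (filename is an assumption; check the repo listing)
huggingface-cli download mradermacher/Eurydice-24b-v1-GGUF \
  Eurydice-24b-v1.Q4_K_M.gguf --local-dir .

# Interactive chat; -ngl offloads model layers to the GPU
llama-cli -m Eurydice-24b-v1.Q4_K_M.gguf -cnv --temp 0.8 -ngl 99
```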

## Model Architecture 🏗️

- **Base model:** mistralai/Mistral-Small-3.1-24B-Instruct-2503
- **Parameter count:** ~24 billion
- **Architecture specifics:** Transformer-based language model

## Intended Use 🎯

Eurydice 24b v1 is intended as an advanced language model for a variety of natural language processing tasks, including but not limited to text generation (it excels in chat), question answering, and analysis.
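For chat use, the model can be driven through the standard transformers pipeline API. The sketch below is an assumption based on the usual workflow for chat-tuned Mistral models, not an official recipe from this card; the repository id comes from the Model Details section, and the actual generation call is left uninvoked because it downloads ~24B parameters of weights and needs a large GPU.

```python
def build_chat(system_prompt, user_message):
    """Assemble a conversation in the role/content messages format
    used by chat-tuned models."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

def generate(messages, max_new_tokens=256):
    """Run the model via transformers (heavy: large download, big GPU).
    Returns the continued conversation produced by the pipeline."""
    from transformers import pipeline
    pipe = pipeline(
        "text-generation",
        model="aixonlab/Eurydice-24b-v1",
        torch_dtype="auto",
        device_map="auto",
    )
    return pipe(messages, max_new_tokens=max_new_tokens)[0]["generated_text"]

messages = build_chat(
    "You are Eurydice, a creative storytelling companion.",
    "Start a short mystery story set in a lighthouse.",
)
# generate(messages)  # uncomment on a machine with enough VRAM
```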

## Ethical Considerations 🤔

As a model based on multiple sources, Eurydice 24b v1 may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.

## Performance and Evaluation

Performance metrics and evaluation results for Eurydice 24b v1 have not yet been published. Users are encouraged to contribute their findings and benchmarks.

## Limitations and Biases

The model may exhibit biases present in its training data and constituent models. It is important to evaluate its outputs critically and to use them in conjunction with human judgment.

## Additional Information

For more details on the base model and constituent models, please refer to their respective model cards and documentation.