Update README.md
README.md CHANGED
This is a [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") and [LoRA+](https://arxiv.org/abs/2402.12354 "LoRA+: Efficient Low Rank Adaptation of Large Models") fine-tuned danube2 base model. It uses the ChatML template and was trained on the [openhermes-unfiltered](https://huggingface.co/datasets/Crystalcareai/openhermes_200k_unfiltered) dataset.

## Quants

Thank you [mradermacher](https://huggingface.co/mradermacher)!

- [mradermacher/danube2-1.8b-openhermes-GGUF](https://huggingface.co/mradermacher/danube2-1.8b-openhermes-GGUF) (see the download sketch below)
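If you want to fetch one of the GGUF files programmatically, a minimal sketch with `huggingface_hub` is shown below. The filename is an assumption based on common quant naming; check the repository's file list for the exact quant you want.

```python
from huggingface_hub import hf_hub_download

# NOTE: the filename is a guess (Q4_K_M is a common choice); browse the
# GGUF repo's files to pick the exact quantization level you need.
gguf_path = hf_hub_download(
    repo_id="mradermacher/danube2-1.8b-openhermes-GGUF",
    filename="danube2-1.8b-openhermes.Q4_K_M.gguf",
)
print(gguf_path)
```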
## Template
```jinja
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
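For reference, here is a minimal sketch of prompting the model with this template through `transformers`. The repo id is a placeholder for this model's path, and it assumes the tokenizer ships a ChatML chat template matching the layout above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "danube2-1.8b-openhermes"  # placeholder: substitute the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the ChatML format in one sentence."},
]

# apply_chat_template renders the ChatML layout shown above and, with
# add_generation_prompt=True, appends the assistant header so the model
# starts generating the response.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```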
## BAdam
**System:** You are a helpful assistant.