Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ dataset for this model. This results in a DPO dataset composed by triplets < ”
 
 ### General Purpose Performance
 
-| | OpenLLM Leaderboard (Average) ↑ | MMLU (ROUGE1) ↑ |
+| | OpenLLM Leaderboard (Average) ↑ | MMLU Generative (ROUGE1) ↑ |
 |------------------------------|:---------------------:|:---------------:|
 | Meta-Llama-3.1-8B-Instruct | 0.453 | 0.646 |
 | Meta-Llama-3.1-8B-Egida-DPO | 0.453 | 0.643 |
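The hunk context above mentions that the Egida DPO dataset is composed of triplets of prompt, chosen response, and rejected response. As a minimal sketch only, assuming a simple JSON-lines layout (field names and file name here are illustrative, not the dataset's actual schema):

```python
import json

# Illustrative DPO preference triplet; keys are assumptions, not the
# dataset's real schema. DPO pairs each prompt with a preferred ("chosen")
# and a dispreferred ("rejected") response.
triplet = {
    "prompt": "How do I pick a strong passphrase?",
    "chosen": "Use several unrelated words and store them in a password manager.",
    "rejected": "Reuse a short password you already know so you don't forget it.",
}

# One record per line in a hypothetical JSON-lines file.
with open("dpo_triplets_sample.jsonl", "w") as f:
    f.write(json.dumps(triplet) + "\n")
```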