Update README.md
## Eval

## EQ Bench

**Evaluated in 4bit**

    ----Benchmark Complete----
    2024-01-24 00:37:10
    Time taken: 20.9 mins
    Prompt Format: Mistral
    Model: macadeliccc/laser-dolphin-mixtral-2x7b-dpo
    Score (v2): 70.52
    Parseable: 169.0
    ---------------
    Batch completed
    Time taken: 20.9 mins
    ---------------
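The result block above is plain text emitted by the EQ-Bench runner. If you want the scores programmatically (e.g. to track runs over time), a small parser can pull out the `Key: value` fields; note that `parse_eq_bench` below is a hypothetical helper sketched for this README, not part of EQ-Bench itself:

```python
import re

# The EQ-Bench result block, copied verbatim from the run above.
raw = """----Benchmark Complete----
2024-01-24 00:37:10
Time taken: 20.9 mins
Prompt Format: Mistral
Model: macadeliccc/laser-dolphin-mixtral-2x7b-dpo
Score (v2): 70.52
Parseable: 169.0
---------------"""

def parse_eq_bench(text: str) -> dict:
    """Extract 'Key: value' fields from an EQ-Bench result block.

    Separator lines and the timestamp (which starts with a digit)
    are skipped, since keys always start with a letter.
    """
    fields = {}
    for line in text.splitlines():
        m = re.match(r"([A-Za-z][^:]*):\s*(.+)", line)
        if m:
            fields[m.group(1).strip()] = m.group(2).strip()
    return fields

result = parse_eq_bench(raw)
print(result["Score (v2)"], result["Parseable"])  # -> 70.52 169.0
```

The values come back as strings; cast with `float()` if you need to compare scores numerically.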
Evaluation notebook: [Colab](https://colab.research.google.com/drive/1FpwgsGzCR4tORTxAwUxpN3PcP22En2xk?usp=sharing)

## Summary of previous evaluation

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|