prithivMLmods committed
Commit a6de899 · verified · 1 Parent(s): 12db607

Adding Evaluation Results (#1)

- Adding Evaluation Results (493c88d6fd7f6bb8710a66cc49f0846a455b456b)

Files changed (1):
  1. README.md +114 -1

README.md CHANGED

@@ -14,6 +14,105 @@ tags:
 - sft
 - code
 - math
+model-index:
+- name: Gauss-Opus-14B-R999
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 39.07
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FGauss-Opus-14B-R999
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 44.94
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FGauss-Opus-14B-R999
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 57.55
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FGauss-Opus-14B-R999
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 18.9
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FGauss-Opus-14B-R999
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 27.83
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FGauss-Opus-14B-R999
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 44.53
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FGauss-Opus-14B-R999
+      name: Open LLM Leaderboard
 ---
 ![ccccccccccccc.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ii0oEprS2lm6Zoama7CPe.png)
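
Once merged, the model-index block above is machine-readable card metadata rather than plain text. A minimal sketch of reading it back, assuming the `huggingface_hub` package and its `ModelCard`/`EvalResult` API:

```python
from huggingface_hub import ModelCard

# Load the card for this repo; the YAML header is parsed into card.data.
card = ModelCard.load("prithivMLmods/Gauss-Opus-14B-R999")

# eval_results is populated from the model-index block added in this commit.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_value} ({result.metric_type})")
```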

@@ -100,4 +199,18 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 Small errors in early steps may compound in multi-step proofs and long-form mathematical derivations.
 
 5. **Prompt Sensitivity**:
-The quality of responses depends on how well the problem is structured and framed within the input prompt.
+The quality of responses depends on how well the problem is structured and framed within the input prompt.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Gauss-Opus-14B-R999-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FGauss-Opus-14B-R999&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric              | Value (%) |
+|---------------------|----------:|
+| **Average**         |     38.80 |
+| IFEval (0-Shot)     |     39.07 |
+| BBH (3-Shot)        |     44.94 |
+| MATH Lvl 5 (4-Shot) |     57.55 |
+| GPQA (0-shot)       |     18.90 |
+| MuSR (0-shot)       |     27.83 |
+| MMLU-PRO (5-shot)   |     44.53 |
+
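
For reference, the **Average** row in the table above is the plain arithmetic mean of the six benchmark scores; a quick check:

```python
# The Average row is the unweighted mean of the six benchmark scores.
scores = [39.07, 44.94, 57.55, 18.90, 27.83, 44.53]
average = sum(scores) / len(scores)
print(f"{average:.2f}")  # -> 38.80, matching the table
```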