---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- math
---

## Overview
VCBench provides a standardized framework for evaluating vision-language models on multimodal mathematical reasoning. This document outlines the procedures for both standard evaluation and GPT-assisted evaluation of your model's outputs.

## 1. Standard Evaluation

### 1.1 Output Format Requirements
Models must produce outputs in JSONL format, one prediction per line, with the following structure:
```
{"id": <int>, "pred_answer": "<answer_letter>"}
{"id": <int>, "pred_answer": "<answer_letter>"}
...
```

**Example File (`submit.jsonl`):**
```json
{"id": 1, "pred_answer": "A"}
{"id": 2, "pred_answer": "B"}
{"id": 3, "pred_answer": "C"}
```

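A minimal Python sketch for writing predictions in this format (the `predictions` list here is a hypothetical stand-in for your model's outputs):

```python
import json

# Hypothetical model outputs: (question id, predicted answer letter) pairs.
predictions = [(1, "A"), (2, "B"), (3, "C")]

with open("submit.jsonl", "w", encoding="utf-8") as f:
    for qid, letter in predictions:
        # One JSON object per line, matching {"id": <int>, "pred_answer": "<answer_letter>"}.
        f.write(json.dumps({"id": qid, "pred_answer": letter}) + "\n")
```
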
### 1.2 Evaluation Procedure
1. Ensure your predictions file follows the format specified in Section 1.1.
2. Run the evaluation script:
```bash
python evaluate_vcbench.py -p ./path/to/predictions.jsonl -g ./path/to/VCBench_with_answer.json
```
`VCBench_with_answer.json` is the ground-truth file; it can be downloaded from [here](https://huggingface.co/datasets/cloudcatcher2/VCBench/resolve/main/VCBench_with_answer.json).

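For a rough sense of what the script computes, here is a sketch of the overall-accuracy calculation. Note that the structure of the ground-truth records (in particular the `answer` field name) is an assumption here; the official script is the reference implementation:

```python
import json

# Load predictions: one JSON object per line.
with open("submit.jsonl", encoding="utf-8") as f:
    preds = {rec["id"]: rec["pred_answer"]
             for rec in (json.loads(line) for line in f if line.strip())}

# Load ground truth. ASSUMPTION: a list of records, each carrying an "id" and
# an "answer" letter; check the downloaded file for the actual field names.
with open("VCBench_with_answer.json", encoding="utf-8") as f:
    gt = json.load(f)

correct = sum(preds.get(item["id"]) == item["answer"] for item in gt)
print(f"Overall accuracy: {correct / len(gt):.2%}")
```
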
## 2. GPT-Assisted Evaluation

### 2.1 Output Format Requirements
For free-form natural-language responses, use this JSONL format:
```
{"id": <int>, "pred_answer": "<natural_language_response>"}
{"id": <int>, "pred_answer": "<natural_language_response>"}
...
```

**Example File (`nl_predictions.jsonl`):**
```json
{"id": 1, "pred_answer": "The correct answer is A"}
{"id": 2, "pred_answer": "After careful analysis, option B appears correct"}
{"id": 3, "pred_answer": "C is the right choice"}
```

### 2.2 Environment Setup
Set your DashScope API key as an environment variable:
```bash
export DASHSCOPE_KEY="your_api_key_here"
```

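For context, here is a minimal sketch of the kind of DashScope judging call the script might make; the model name (`qwen-plus`) and the prompt are illustrative assumptions, not the actual logic of `evaluate_vcbench_by_gpt.py`:

```python
import os

import dashscope

# Read the key exported in Section 2.2.
dashscope.api_key = os.environ["DASHSCOPE_KEY"]

# Hypothetical judging prompt: ask the model whether a free-form response
# selects the ground-truth answer letter.
response = dashscope.Generation.call(
    model="qwen-plus",  # assumed model; the script may use a different one
    messages=[{
        "role": "user",
        "content": 'Ground-truth answer: A. Model response: "The correct answer is A". '
                   "Does the response select the ground-truth answer? Reply yes or no.",
    }],
    result_format="message",
)
print(response.output.choices[0].message.content)
```
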
### 2.3 Evaluation Procedure
```bash
python evaluate_vcbench_by_gpt.py -p ./path/to/nl_predictions.jsonl -g ./path/to/VCBench_with_answer.json
```

## 3. Expected Output
Both evaluation scripts will provide:
- Overall accuracy percentage
- Per-question-type accuracy breakdown
- Progress updates during evaluation

## Citation

**BibTeX:**

```bibtex
@misc{wong2025vcbench,
  author        = {Zhikai Wang and Jiashuo Sun and Wenqi Zhang and Zhiqiang Hu and Xin Li and Fan Wang and Deli Zhao},
  title         = {Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency},
  year          = {2025},
  eprint        = {2504.18589},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2504.18589}
}
```

## Dataset Card Authors

- [Zhikai Wang](https://cloudcatcher888.github.io/): [email protected]
- [Jiashuo Sun](https://gasolsun36.github.io/): [email protected]