---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- math
---

## Overview
VCBench provides a standardized framework for evaluating vision-language models on multimodal mathematical reasoning tasks with explicit visual dependency. This document outlines the procedures for both standard evaluation and GPT-assisted evaluation of your model's outputs.
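The benchmark lives on the Hugging Face Hub under `cloudcatcher2/VCBench` (the repository referenced by the ground-truth link below), so it should be loadable with the `datasets` library. A minimal sketch, assuming the default configuration; split names may differ:

```python
from datasets import load_dataset

# Load VCBench from the Hugging Face Hub. The repository name is taken from
# the ground-truth URL in this card; available splits may differ.
ds = load_dataset("cloudcatcher2/VCBench")
print(ds)
```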

## 1. Standard Evaluation

### 1.1 Output Format Requirements
Models must produce outputs in JSONL format with the following structure:
```
{"id": <int>, "pred_answer": "<answer_letter>"}
{"id": <int>, "pred_answer": "<answer_letter>"}
...
```

**Example File (`submit.jsonl`):**
```json
{"id": 1, "pred_answer": "A"}
{"id": 2, "pred_answer": "B"}
{"id": 3, "pred_answer": "C"}
```
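A minimal sketch of producing `submit.jsonl` from a dictionary of predictions (the `predictions` mapping here is a hypothetical placeholder for your model's outputs):

```python
import json

# Hypothetical model outputs: question id -> predicted answer letter.
predictions = {1: "A", 2: "B", 3: "C"}

with open("submit.jsonl", "w", encoding="utf-8") as f:
    for qid, letter in predictions.items():
        # One JSON object per line, matching the required schema.
        f.write(json.dumps({"id": qid, "pred_answer": letter}) + "\n")
```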

### 1.2 Evaluation Procedure
1. Ensure your predictions file follows the format specified above.
2. Run the evaluation script:
   ```bash
   python evaluate_vcbench.py -p ./path/to/predictions.jsonl -g ./path/to/VCBench_with_answer.json
   ```
`VCBench_with_answer.json` is the ground-truth file, which can be downloaded from [here](https://huggingface.co/datasets/cloudcatcher2/VCBench/resolve/main/VCBench_with_answer.json).
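The script scores each prediction by matching the predicted letter against the ground truth. The sketch below shows the core of that comparison, assuming the ground-truth JSON is a list of records with `id` and `answer` fields; the real file may use different field names, in which case `evaluate_vcbench.py` remains the authoritative reference.

```python
import json

# Assumed ground-truth layout: a list of records with "id" and "answer" keys.
with open("VCBench_with_answer.json", encoding="utf-8") as f:
    gold = {item["id"]: item["answer"] for item in json.load(f)}

# Predictions in the JSONL format described above.
with open("submit.jsonl", encoding="utf-8") as f:
    preds = {rec["id"]: rec["pred_answer"] for rec in map(json.loads, f)}

correct = sum(1 for qid, ans in gold.items() if preds.get(qid) == ans)
print(f"Overall accuracy: {correct / len(gold):.2%}")
```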

## 2. GPT-Assisted Evaluation

### 2.1 Output Format Requirements
For natural language responses, use this JSONL format:
```
{"id": <int>, "pred_answer": "<natural_language_response>"}
{"id": <int>, "pred_answer": "<natural_language_response>"}
...
```

**Example File (`nl_predictions.jsonl`):**
```json
{"id": 1, "pred_answer": "The correct answer is A"}
{"id": 2, "pred_answer": "After careful analysis, option B appears correct"}
{"id": 3, "pred_answer": "C is the right choice"}
```

### 2.2 Environment Setup
Set your DashScope API key:
```bash
export DASHSCOPE_KEY="your_api_key_here"
```
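If you call the DashScope SDK yourself (for example, when adapting the script), the key can be picked up from the environment on the Python side as well. A small sketch, assuming the `dashscope` package is installed:

```python
import os

import dashscope  # pip install dashscope

# Read the key exported above; the evaluation script is expected to do the same.
dashscope.api_key = os.environ["DASHSCOPE_KEY"]
```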

### 2.3 Evaluation Procedure
```bash
python evaluate_vcbench_by_gpt.py -p ./path/to/nl_predictions.jsonl -g ./path/to/VCBench_with_answer.json
```
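`evaluate_vcbench_by_gpt.py` uses an LLM judge to map each natural-language response onto an option letter before scoring. The sketch below illustrates the idea with DashScope's `Generation.call`; the model name and prompt are illustrative assumptions, not the exact ones used by the script.

```python
import os

import dashscope
from dashscope import Generation

dashscope.api_key = os.environ["DASHSCOPE_KEY"]


def extract_letter(question: str, response: str) -> str:
    """Ask an LLM judge which option letter a free-form response selects."""
    prompt = (
        "Reply with only the single option letter that the answer below chooses.\n"
        f"Question: {question}\nAnswer: {response}"
    )
    result = Generation.call(model="qwen-plus", prompt=prompt)  # illustrative model choice
    return result.output.text.strip()
```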

## 3. Expected Output
Both evaluation scripts will provide:
- Overall accuracy percentage
- Per-question-type accuracy breakdown
- Progress updates during evaluation



## Citation


**BibTeX:**

```bibtex
@misc{wong2025vcbench,
  author    = {Zhikai Wang and Jiashuo Sun and Wenqi Zhang and Zhiqiang Hu and Xin Li and Fan Wang and Deli Zhao},
  title     = {Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency},
  year      = {2025},
  eprint    = {2504.18589},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url       = {https://arxiv.org/abs/2504.18589}
}
```

## Dataset Card Authors

- [Zhikai Wang](https://cloudcatcher888.github.io/): [email protected]
- [Jiashuo Sun](https://gasolsun36.github.io/): [email protected]