---
license: apache-2.0
dataset_info:
  features:
  - name: org
    dtype: string
  - name: repo
    dtype: string
  - name: number
    dtype: int64
  - name: state
    dtype: string
  - name: title
    dtype: string
  - name: body
    dtype: string
  - name: base
    dtype: string
  - name: resolved_issues
    list:
    - name: body
      dtype: string
    - name: number
      dtype: int64
    - name: title
      dtype: string
  - name: fix_patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: fixed_tests
    dtype: string
  - name: p2p_tests
    dtype: string
  - name: f2p_tests
    dtype: string
  - name: s2p_tests
    dtype: string
  - name: n2p_tests
    dtype: string
  - name: run_result
    dtype: string
  - name: test_patch_result
    dtype: string
  - name: fix_patch_result
    dtype: string
  - name: instance_id
    dtype: string
  splits:
  - name: c
    num_bytes: 27137585
    num_examples: 128
  - name: cpp
    num_bytes: 6406845
    num_examples: 129
  - name: go
    num_bytes: 171175811
    num_examples: 428
  - name: java
    num_bytes: 15981812
    num_examples: 50
  - name: javascript
    num_bytes: 505878991
    num_examples: 356
  - name: rust
    num_bytes: 40755929
    num_examples: 239
  - name: typescript
    num_bytes: 823172694
    num_examples: 224
  download_size: 375407095
  dataset_size: 1590509667
configs:
- config_name: default
  data_files:
  - split: c
    path: data/c-*
  - split: cpp
    path: data/cpp-*
  - split: go
    path: data/go-*
  - split: java
    path: data/java-*
  - split: javascript
    path: data/javascript-*
  - split: rust
    path: data/rust-*
  - split: typescript
    path: data/typescript-*
---

# Overview
We are delighted to release Multi-SWE-bench!
Multi-SWE-bench is a multilingual benchmark of real software engineering scenarios for evaluating the ability of LLMs to solve real-world software engineering problems.
The dataset currently covers seven languages: C, C++, Java, JavaScript, TypeScript, Rust, and Go.

# Data Instances Structure
Each Multi-SWE-bench instance has the following fields:
```
org: (str) - Organization name identifier from GitHub.
repo: (str) - Repository name identifier from GitHub.
number: (int) - The PR number.
state: (str) - The PR state.
title: (str) - The PR title.
body: (str) - The PR body.
base: (dict) - The target branch information of the PR.
resolved_issues: (list) - A JSON list of the issues (number, title, body) resolved by this PR.
fix_patch: (str) - The fix-file patch contributed by the solution PR.
test_patch: (str) - The test-file patch contributed by the solution PR.
fixed_tests: (dict) - A JSON dict of the tests that should be fixed after the PR is applied.
p2p_tests: (dict) - The tests that should pass both before and after the PR is applied.
f2p_tests: (dict) - The tests that fail before and pass after the PR is applied, i.e., those tied to the issue resolution.
s2p_tests: (dict) - The tests that are skipped before the PR is applied and pass after it.
n2p_tests: (dict) - The tests that do not exist before the PR is applied and pass after it.
run_result: (dict) - Overall run results, including the number of tests passed, the number failed, etc.
test_patch_result: (dict) - The result after the test patch is applied.
fix_patch_result: (dict) - The result after all patches are applied.
instance_id: (str) - A formatted instance identifier, usually org__repo_PR-number.
```
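For concreteness, a hypothetical instance might look like the following; every value below is illustrative, not taken from the dataset:
```
org: "example-org"
repo: "example-repo"
number: 1234
state: "closed"
title: "Fix crash when parsing empty input"
body: "Closes #1230 by adding a null check in the parser."
base: {"label": "example-org:main", "ref": "main", "sha": "abc123"}
resolved_issues: [{"number": 1230, "title": "Crash on empty input", "body": "..."}]
fix_patch: "diff --git a/src/parser.c b/src/parser.c\n..."
test_patch: "diff --git a/test/parser_test.c b/test/parser_test.c\n..."
f2p_tests: {"test_parse_empty": "..."}
instance_id: "example-org__example-repo-1234"
```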

# Usage
Load the C++ split:
```python
from datasets import load_dataset

cpp_split = load_dataset("msb/msb", split='cpp')
```
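The other language splits load the same way; for example, to load every split at once (the repository name follows the snippet above):
```python
from datasets import load_dataset

# Split names come from the dataset card: c, cpp, go, java, javascript, rust, typescript
LANGS = ["c", "cpp", "go", "java", "javascript", "rust", "typescript"]
all_splits = {lang: load_dataset("msb/msb", split=lang) for lang in LANGS}
```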

Because Hugging Face's datasets library does not support complex nested structures,
the following fields were serialized to JSON strings in the hosted dataset,
and you will have to deserialize them before using this dataset.
```python
SERIALIZATION_FIELDS = [
    'base', 'fixed_tests', 'p2p_tests', 'f2p_tests',
    's2p_tests', 'n2p_tests', 'run_result',
    'test_patch_result', 'fix_patch_result'
]
```
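A minimal way to decode a single row, assuming the `cpp_split` loaded above (`deserialize_record` is an illustrative helper, not part of the dataset tooling):
```python
import json

def deserialize_record(record):
    """Return a copy of a dataset row with its JSON-serialized fields decoded."""
    return {
        key: (json.loads(value) if key in SERIALIZATION_FIELDS and value else value)
        for key, value in record.items()
    }

row = deserialize_record(cpp_split[0])
print(type(row["base"]))        # dict after decoding
print(type(row["run_result"]))  # dict after decoding
```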


## Sample
```python
from datasets import load_dataset, config
import pandas as pd
import os
import json

# Constant definitions
# Fields whose nested structures were serialized to JSON strings in the hosted
# dataset; they must be deserialized before use
SERIALIZATION_FIELDS = [
    'base', 'fixed_tests', 'p2p_tests', 'f2p_tests',
    's2p_tests', 'n2p_tests', 'run_result',
    'test_patch_result', 'fix_patch_result'
]
CACHE_DIR = 'D:/huggingface_cache'  # adjust to your local cache path

def safe_deserialize(value):
    """Safely deserialize a JSON string"""
    try:
        if value in (None, ''):
            return None
        return json.loads(value)
    except (TypeError, json.JSONDecodeError) as e:
        print(f"Deserialization failed: {str(e)}")
        return value

def load_hf_dataset():
    """Load a HuggingFace dataset"""
    os.environ['HF_HOME'] = CACHE_DIR
    config.HF_DATASETS_CACHE = CACHE_DIR
    return load_dataset("msb/msb", split='cpp')

def analyze_dataset_structure(dataset):
    """Analyze and print the dataset structure"""
    print(f"Dataset size: {len(dataset)}")
    print("\nDataset structure analysis: " + "-" * 50)
    print("Field names and types:")
    for name, dtype in dataset.features.items():
        print(f"  {name}: {str(dtype)}")

def print_data_types(dataset, sample_count=3):
    """Print the data types of sample data"""
    print(f"\nData types of the first {sample_count} samples:")
    for i in range(min(sample_count, len(dataset))):
        print(f"\nSample {i}:")
        for key, value in dataset[i].items():
            print(f"  {key}: {type(value).__name__} ({len(str(value))} chars)")

def analyze_serialization(dataset, sample_count=3):
    """Analyze the deserialization results of fields"""
    print("\nDeserialization result analysis: " + "-" * 50)
    for i in range(min(sample_count, len(dataset))):
        print(f"\nSample {i}:")
        item = dataset[i]
        for key in SERIALIZATION_FIELDS:
            raw_value = item.get(key)
            deserialized = safe_deserialize(raw_value)
            
            print(f"Field [{key}]:")
            print(f"  Original type: {type(raw_value).__name__}")
            print(f"  Deserialized type: {type(deserialized).__name__ if deserialized else 'None'}")
            
            if isinstance(deserialized, dict):
                sample = dict(list(deserialized.items())[:2])
                print(f"  Sample content: {str(sample)[:200]}...")
            elif deserialized:
                print(f"  Content preview: {str(deserialized)[:200]}...")
            else:
                print("  Empty/Invalid data")

def main():
    """Main function entry"""
    dataset = load_hf_dataset()
    # analyze_dataset_structure(dataset)
    # print_data_types(dataset)
    analyze_serialization(dataset)

if __name__ == "__main__":
    main()

```
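Since `fix_patch` and `test_patch` are unified diffs, one plausible evaluation flow is to check out the repository at the base revision and apply the patches with `git apply`. A minimal sketch; the checkout step, field handling, and `apply_patches` helper are assumptions, not part of the dataset tooling:
```python
import subprocess

def apply_patches(repo_dir, item):
    """Apply the test patch, then the fix patch, to a repository
    already checked out at the instance's base revision (assumption)."""
    for field in ("test_patch", "fix_patch"):
        patch = item[field]
        if patch:
            # git apply reads the diff from stdin when no patch file is given
            subprocess.run(["git", "apply"], cwd=repo_dir,
                           input=patch.encode(), check=True)
```
After applying only `test_patch`, the fail-to-pass tests listed in `f2p_tests` should fail; after also applying `fix_patch`, they should pass.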

# Citation
If you find SWE-bench or our Multi-SWE-bench helpful for your work, please cite them as follows:
```
@misc{zan2025multiswebench,
      title={Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving}, 
      author={Xxx},
      year={2025},
      eprint={2503.17315},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={xxx}, 
}
```