---
license: apache-2.0
dataset_info:
  features:
    - name: org
      dtype: string
    - name: repo
      dtype: string
    - name: number
      dtype: int64
    - name: state
      dtype: string
    - name: title
      dtype: string
    - name: body
      dtype: string
    - name: base
      dtype: string
    - name: resolved_issues
      list:
        - name: body
          dtype: string
        - name: number
          dtype: int64
        - name: title
          dtype: string
    - name: fix_patch
      dtype: string
    - name: test_patch
      dtype: string
    - name: fixed_tests
      dtype: string
    - name: p2p_tests
      dtype: string
    - name: f2p_tests
      dtype: string
    - name: s2p_tests
      dtype: string
    - name: n2p_tests
      dtype: string
    - name: run_result
      dtype: string
    - name: test_patch_result
      dtype: string
    - name: fix_patch_result
      dtype: string
    - name: instance_id
      dtype: string
  splits:
    - name: c
      num_bytes: 27137585
      num_examples: 128
    - name: cpp
      num_bytes: 6406845
      num_examples: 129
    - name: go
      num_bytes: 171175811
      num_examples: 428
    - name: java
      num_bytes: 15981812
      num_examples: 50
    - name: javascript
      num_bytes: 505878991
      num_examples: 356
    - name: rust
      num_bytes: 40755929
      num_examples: 239
    - name: typescript
      num_bytes: 823172694
      num_examples: 224
  download_size: 375407095
  dataset_size: 1590509667
configs:
  - config_name: default
    data_files:
      - split: c
        path: data/c-*
      - split: cpp
        path: data/cpp-*
      - split: go
        path: data/go-*
      - split: java
        path: data/java-*
      - split: javascript
        path: data/javascript-*
      - split: rust
        path: data/rust-*
      - split: typescript
        path: data/typescript-*
---

Overview

We are delighted to release Multi-SWE-Bench, a multilingual benchmark of real-world software engineering scenarios for evaluating the ability of LLMs to resolve real software engineering issues. The dataset currently covers seven languages: C, C++, Java, JavaScript, TypeScript, Rust, and Go.

Data Instances Structure

An example of a Multi-SWE-bench datum is as follows:

org: (str) - Organization name identifier from GitHub.
repo: (str) - Repository name identifier from GitHub.
number: (int) - The PR number.
state: (str) - The PR state.
title: (str) - The PR title.
body: (str) - The PR body.
base: (dict) - Information about the PR's target branch.
resolved_issues: (list) - A list of the issues (number, title, body) resolved by the PR.
fix_patch: (str) - The fix-file patch contributed by the solution PR.
test_patch: (str) - The test-file patch contributed by the solution PR.
fixed_tests: (dict) - A JSON dict of the tests that should be fixed after the PR is applied.
p2p_tests: (dict) - Tests that should pass both before and after the PR is applied.
f2p_tests: (dict) - Tests resolved by the PR and tied to the issue resolution (fail before, pass after).
s2p_tests: (dict) - Tests that should be skipped before the PR is applied and pass after it is applied.
n2p_tests: (dict) - Tests that did not exist before the PR and should pass after it is applied.
run_result: (dict) - Overall run results, including the number of tests passed, the number of tests failed, etc.
test_patch_result: (dict) - The result after the test patch is applied.
fix_patch_result: (dict) - The result after all the patches are applied.
instance_id: (str) - A formatted instance identifier, usually as org__repo_PR-number.
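
To make the layout concrete, below is a minimal sketch of a single datum. Every value is an invented placeholder rather than real dataset content, and the nested fields are shown already decoded:

# A hypothetical Multi-SWE-bench datum; all values are illustrative placeholders.
datum = {
    'org': 'example-org',
    'repo': 'example-repo',
    'number': 1234,
    'state': 'closed',
    'title': 'Fix crash when parsing empty input',
    'body': 'Closes #1200.',
    'base': {'label': 'example-org:main', 'ref': 'main', 'sha': '0' * 40},
    'resolved_issues': [
        {'number': 1200, 'title': 'Crash on empty input', 'body': 'Parser crashes on empty files.'}
    ],
    'fix_patch': 'diff --git a/src/parser.c b/src/parser.c ...',
    'test_patch': 'diff --git a/tests/test_parser.c b/tests/test_parser.c ...',
    'instance_id': 'example-org__example-repo_1234',
}

# instance_id usually follows the org__repo_PR-number convention:
assert datum['instance_id'] == f"{datum['org']}__{datum['repo']}_{datum['number']}"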

Usage

Load the C++ split:

from datasets import load_dataset

cpp_split = load_dataset("Hagon/test2", split='cpp')
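
Omitting the split argument loads every language split at once. A minimal sketch, assuming the same repository id used in the sample code below:

from datasets import load_dataset

# Without a split, load_dataset returns a DatasetDict keyed by split name
# (c, cpp, go, java, javascript, rust, typescript).
all_splits = load_dataset("Hagon/test2")
for lang, split in all_splits.items():
    print(lang, len(split))  # e.g. cpp 129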

Because Hugging Face's datasets library does not support complex nested structures, the nested structures inside the following fields were serialized to JSON strings in the hosted dataset, and you will need to deserialize them before use:

SERIALIZATION_FIELDS = [
    'base', 'fixed_tests', 'p2p_tests', 'f2p_tests',
    's2p_tests', 'n2p_tests', 'run_result',
    'test_patch_result', 'fix_patch_result'
]
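
For example, a single record can be decoded with a dict comprehension that reuses the SERIALIZATION_FIELDS list above (a minimal sketch; empty fields are passed through unchanged):

import json
from datasets import load_dataset

cpp_split = load_dataset("Hagon/test2", split='cpp')

# Decode each serialized field of the first record back into a Python object.
record = {
    key: json.loads(value) if key in SERIALIZATION_FIELDS and value else value
    for key, value in cpp_split[0].items()
}
print(type(record['run_result']))  # dict once deserialized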

A complete example:

from datasets import load_dataset, config
import os
import json

# Constant definitions
# There are nested structures within these fields, which were serialized in the original dataset, and now these need to be deserialized
SERIALIZATION_FIELDS = [
    'base', 'fixed_tests', 'p2p_tests', 'f2p_tests',
    's2p_tests', 'n2p_tests', 'run_result',
    'test_patch_result', 'fix_patch_result'
]
CACHE_DIR = 'D:/huggingface_cache'  # adjust to a writable cache path on your machine

def safe_deserialize(value):
    """Safely deserialize a JSON string"""
    try:
        if value in (None, ''):
            return None
        return json.loads(value)
    except (TypeError, json.JSONDecodeError) as e:
        print(f"Deserialization failed: {str(e)}")
        return value

def load_hf_dataset():
    """Load the dataset from the Hugging Face Hub"""
    # datasets is already imported, so redirect both the environment
    # variable and the library config to the custom cache directory.
    os.environ['HF_HOME'] = CACHE_DIR
    config.HF_DATASETS_CACHE = CACHE_DIR
    return load_dataset("Hagon/test2", split='cpp')

def analyze_dataset_structure(dataset):
    """Analyze and print the dataset structure"""
    print(f"Dataset size: {len(dataset)}")
    print("\nDataset structure analysis: " + "-" * 50)
    print("Field names and types:")
    for name, dtype in dataset.features.items():
        print(f"  {name}: {str(dtype)}")

def print_data_types(dataset, sample_count=3):
    """Print the data types of sample data"""
    print(f"\nData types of the first {sample_count} samples:")
    for i in range(min(sample_count, len(dataset))):
        print(f"\nSample {i}:")
        for key, value in dataset[i].items():
            print(f"  {key}: {type(value).__name__} ({len(str(value))} chars)")

def analyze_serialization(dataset, sample_count=3):
    """Analyze the deserialization results of fields"""
    print("\nDeserialization result analysis: " + "-" * 50)
    for i in range(min(sample_count, len(dataset))):
        print(f"\nSample {i}:")
        item = dataset[i]
        for key in SERIALIZATION_FIELDS:
            raw_value = item.get(key)
            deserialized = safe_deserialize(raw_value)
            
            print(f"Field [{key}]:")
            print(f"  Original type: {type(raw_value).__name__}")
            print(f"  Deserialized type: {type(deserialized).__name__ if deserialized else 'None'}")
            
            if isinstance(deserialized, dict):
                sample = dict(list(deserialized.items())[:2])
                print(f"  Sample content: {str(sample)[:200]}...")
            elif deserialized:
                print(f"  Content preview: {str(deserialized)[:200]}...")
            else:
                print("  Empty/Invalid data")

def main():
    """Main function entry"""
    dataset = load_hf_dataset()
    # analyze_dataset_structure(dataset)
    # print_data_types(dataset)
    analyze_serialization(dataset)

if __name__ == "__main__":
    main()

Citation

If you find SWE-bench or our Multi-SWE-bench helpful for your work, please cite as follows:

@misc{zan2025multiswebench,
      title={Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving}, 
      author={Xxx},
      year={2025},
      eprint={2503.17315},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={xxx}, 
}