Dataset Card: PrimeVul Dataset Splits
Overview
PrimeVul is a dataset crafted for vulnerability detection in C/C++ code, aimed at training and evaluating code language models under realistic conditions. This dataset card describes the pre-split version (train, validation, and test) uploaded to Hugging Face, based on the PrimeVul-v0.1 release. It includes approximately 7,000 vulnerable functions and 229,000 benign functions from real-world projects, covering over 140 Common Weakness Enumerations (CWEs). The dataset emphasizes accurate labeling, minimal data contamination, and rich metadata for advanced analysis.
Dataset Content
The dataset comprises C/C++ functions with the following key attributes:
- Function Code: Source code of vulnerable and benign functions.
- Labels: Binary labels indicating vulnerable (1) or benign (0), assigned with labeling techniques that approach human-level accuracy.
- Metadata:
  - Commit Metadata: Project URL, commit URL, and commit message for vulnerability context.
  - Vulnerability Metadata: CVE description and NVD link for identified vulnerabilities.
  - File-level Metadata: File name, relative path, function location, and a copy of the full file for context.
The dataset is constructed from existing vulnerability detection datasets, reconstructed with improved labels and chronological splits to minimize contamination. In PrimeVul-v0.1, only samples with successfully retrieved metadata are included.
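Concretely, each row can be thought of as a flat mapping from attribute names to values. The field names below are illustrative assumptions for demonstration only; the actual column names in the Hugging Face release may differ:

```python
# Illustrative sketch of a single PrimeVul-style record.
# All field names here are hypothetical; check the actual release
# for the real column names.
sample = {
    "func": "int parse_len(const char *s) { return atoi(s); }",  # function source code
    "target": 1,                                                 # 1 = vulnerable, 0 = benign
    "project_url": "https://example.org/project",                # hypothetical commit metadata
    "commit_url": "https://example.org/commit/abc",
    "commit_message": "Fix integer parsing",
    "cve_description": "Improper input validation ...",          # hypothetical vulnerability metadata
    "nvd_url": "https://nvd.nist.gov/vuln/detail/CVE-XXXX-YYYY",
    "file_name": "parse.c",                                      # hypothetical file-level metadata
}

def is_vulnerable(record: dict) -> bool:
    """Return True when the binary label marks the function as vulnerable."""
    return record.get("target") == 1

print(is_vulnerable(sample))  # -> True
```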
Splits
- Train: Contains the majority of the ~7k vulnerable and ~229k benign functions, used for training vulnerability detection models. This split includes diverse examples across 140+ CWEs, with full metadata for commit, vulnerability, and file-level details.
- Validation: A smaller subset of vulnerable and benign functions, used for hyperparameter tuning and intermediate evaluation. Mirrors the training split’s structure and metadata.
- Test: A distinct subset for final evaluation, featuring paired samples (vulnerable code and its patch) to test models’ ability to detect subtle vulnerabilities. Includes the same metadata as other splits, enabling detailed analysis.
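Since the splits ship pre-made, they can be loaded directly with the `datasets` library. The repository id and the `target` column name below are placeholders, not confirmed names from the release:

```python
def class_balance(labels):
    """Count vulnerable (1) vs. benign (0) labels in one split."""
    vulnerable = sum(1 for y in labels if y == 1)
    return {"vulnerable": vulnerable, "benign": len(labels) - vulnerable}

def load_primevul(repo_id="org-name/primevul"):
    """Fetch all three splits; `repo_id` is a placeholder for the
    actual Hugging Face dataset id of this upload."""
    from datasets import load_dataset  # lazy import: requires `pip install datasets`
    return load_dataset(repo_id)

# Quick sanity check of the counting helper on a toy label list:
print(class_balance([1, 0, 0, 1, 1]))  # -> {'vulnerable': 3, 'benign': 2}
```

In practice one would call `load_primevul()` and pass, e.g., `ds["train"]["target"]` to `class_balance` to verify the expected ~7k/~229k imbalance before training.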
Usage
This dataset is ideal for researchers and developers working on vulnerability detection with code language models. The pre-split format and rich metadata support customized training, evaluation, and in-depth vulnerability studies. The test split’s paired samples are particularly useful for assessing model sensitivity to subtle code changes.
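A natural way to exploit the paired test samples is to score a model per pair rather than per function: a pair counts as correct only when both the vulnerable function and its patched counterpart are classified correctly. The sketch below assumes each record carries a shared grouping field and a binary label (`commit_url` and `target` are hypothetical names; the actual release may encode pairing differently):

```python
from collections import defaultdict

def pairwise_correct(records, predictions, pair_key="commit_url"):
    """Group test records into (vulnerable, patched) pairs by a shared key
    and count pairs where the model labels BOTH sides correctly.

    `pair_key` and `target` are assumed field names for illustration.
    Returns (correct_pairs, total_pairs).
    """
    groups = defaultdict(list)
    for rec, pred in zip(records, predictions):
        groups[rec[pair_key]].append((rec["target"], pred))
    correct, total = 0, 0
    for pairs in groups.values():
        if len(pairs) != 2:
            continue  # skip anything that did not form a complete pair
        total += 1
        if all(label == pred for label, pred in pairs):
            correct += 1
    return correct, total

# Toy example: two pairs, model gets the second pair's patched side wrong.
records = [
    {"commit_url": "a", "target": 1}, {"commit_url": "a", "target": 0},
    {"commit_url": "b", "target": 1}, {"commit_url": "b", "target": 0},
]
preds = [1, 0, 1, 1]
print(pairwise_correct(records, preds))  # -> (1, 2)
```

This pairwise view is stricter than plain accuracy and directly probes whether a model distinguishes a vulnerability from its fix, rather than keying on surface features shared by both versions.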
Source
PrimeVul originates from a combination of existing vulnerability datasets, enhanced with accurate labels and metadata. For more details, refer to the original GitHub repository: https://github.com/DLVulDet/PrimeVul.
Citation
If you use this dataset, please cite the original PrimeVul paper:
@article{ding2024primevul,
  title={Vulnerability Detection with Code Language Models: How Far Are We?},
  author={Yangruibo Ding and Yanjun Fu and Omniyyah Ibrahim and Chawin Sitawarin and Xinyun Chen and Basel Alomair and David Wagner and Baishakhi Ray and Yizheng Chen},
  journal={arXiv preprint arXiv:2403.18624},
  year={2024}
}