---
license: cc-by-sa-3.0
language: zh
tags:
- information-retrieval
- question-answering
- chinese
- wikipedia
- open-domain-qa
pretty_name: DRCD for Document Retrieval (Simplified Chinese)
---
DRCD for Document Retrieval (Simplified Chinese)
This dataset is a reformatted version of the Delta Reading Comprehension Dataset (DRCD), converted to Simplified Chinese and adapted for document-level retrieval tasks.
Summary
The dataset transforms the original DRCD QA data into a document retrieval setting, where queries are used to retrieve entire Wikipedia articles rather than individual passages. Each document is the full text of a Wikipedia entry.
The format is compatible with the data structure used in the LongEmbed benchmark and can be directly plugged into LongEmbed evaluation or training pipelines.
Key Features
- 🔤 Language: Simplified Chinese (converted from Traditional Chinese)
- 📚 Domain: General domain, from Wikipedia
- 📄 Granularity: Full-document retrieval, not passage-level
- 🔍 Use Cases: Long-document retrieval, reranking, open-domain QA pre-retrieval
File Structure
corpus.jsonl
Each line is a single Wikipedia article in Simplified Chinese.
{"id": "doc_00001", "title": "心理", "text": "心理学是一门研究人类和动物的心理现象、意识和行为的科学。..."}
queries.jsonl
Each line is a user query (from the DRCD question field).
{"qid": "6513-4-1", "text": "威廉·冯特为何被誉为“实验心理学之父”?"}
qrels.jsonl
Standard relevance judgments mapping queries to relevant documents.
{"qid": "6513-4-1", "doc_id": "6513"}
This structure matches the LongEmbed benchmark's data format, making it suitable for evaluating long-document retrievers out of the box.
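To illustrate the three-file layout, here is a minimal loader sketch. The `load_jsonl` helper, the temporary directory, and the sample lines are illustrative only (in particular, the ids are toy values chosen so the qrel resolves, not real dataset ids):

```python
import json
import os
import tempfile

def load_jsonl(path):
    """Read a .jsonl file: one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Toy sample lines mirroring the three formats above (illustrative data only).
samples = {
    "corpus.jsonl": '{"id": "6513", "title": "心理", "text": "心理学是一门研究人类和动物的心理现象、意识和行为的科学。"}',
    "queries.jsonl": '{"qid": "6513-4-1", "text": "威廉·冯特为何被誉为“实验心理学之父”?"}',
    "qrels.jsonl": '{"qid": "6513-4-1", "doc_id": "6513"}',
}
data_dir = tempfile.mkdtemp()
for name, line in samples.items():
    with open(os.path.join(data_dir, name), "w", encoding="utf-8") as f:
        f.write(line + "\n")

# Index everything by id, the shape a retrieval evaluation loop needs.
corpus = {d["id"]: d for d in load_jsonl(os.path.join(data_dir, "corpus.jsonl"))}
queries = {q["qid"]: q["text"] for q in load_jsonl(os.path.join(data_dir, "queries.jsonl"))}
qrels = {r["qid"]: r["doc_id"] for r in load_jsonl(os.path.join(data_dir, "qrels.jsonl"))}

# Sanity check: every judged doc_id should resolve to a corpus document.
assert all(doc_id in corpus for doc_id in qrels.values())
```
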
Example: Document Retrieval Using BM25
You can quickly try out document-level BM25 retrieval with the script in the following gist:
https://gist.github.com/ihainan/a1cf382c6042b90c8e55fe415f1b29e8
Usage:
$ python test_long_embed_bm25.py /home/ihainan/projects/Large/AI/DRCD-Simplified-Chinese/ir_dataset/train
...0
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.404 seconds.
Prefix dict has been built successfully.
...200
...
...1800
Acc@1: 64.76%
nDCG@10: 76.61%
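The gist above relies on jieba and an external BM25 implementation. As a dependency-free illustration of the same pipeline, the sketch below ranks toy documents with a from-scratch Okapi BM25 over character bigrams and computes Acc@1 and nDCG@10. The bigram tokenizer merely stands in for jieba segmentation, and all documents, queries, and qrels here are toy data, not drawn from the dataset:

```python
import math
from collections import Counter

def bigrams(text):
    # Character bigrams as a simple, dependency-free stand-in for jieba.
    return [text[i:i + 2] for i in range(len(text) - 1)] or [text]

class BM25:
    def __init__(self, docs, k1=1.5, b=0.75):
        self.tf = [Counter(bigrams(d)) for d in docs]          # per-doc term freqs
        self.dl = [sum(c.values()) for c in self.tf]           # doc lengths
        self.avgdl = sum(self.dl) / len(self.dl)
        self.df = Counter(t for c in self.tf for t in c)       # document freqs
        self.N, self.k1, self.b = len(docs), k1, b

    def score(self, query, i):
        s = 0.0
        for t in bigrams(query):
            f = self.tf[i].get(t, 0)
            if not f:
                continue
            idf = math.log(1 + (self.N - self.df[t] + 0.5) / (self.df[t] + 0.5))
            norm = 1 - self.b + self.b * self.dl[i] / self.avgdl
            s += idf * f * (self.k1 + 1) / (f + self.k1 * norm)
        return s

    def rank(self, query):
        return sorted(range(self.N), key=lambda i: -self.score(query, i))

# Toy corpus, queries, and qrels (doc ids are simply list indices here).
docs = ["心理学是一门研究心理现象、意识和行为的科学。",
        "物理学研究物质、能量与它们之间的相互作用。"]
queries = {"q1": "什么是心理学?", "q2": "物理学研究什么?"}
qrels = {"q1": 0, "q2": 1}

bm25 = BM25(docs)
ranks = {qid: bm25.rank(q) for qid, q in queries.items()}

# Acc@1: fraction of queries whose top-ranked document is the relevant one.
acc1 = sum(ranks[qid][0] == qrels[qid] for qid in queries) / len(queries)

# nDCG@10 with one relevant doc per query: 1 / log2(rank + 2) if the doc
# appears in the top 10, else 0 (the ideal DCG is 1, so DCG equals nDCG).
def ndcg_at10(qid):
    pos = ranks[qid].index(qrels[qid])
    return 1 / math.log2(pos + 2) if pos < 10 else 0.0

ndcg10 = sum(ndcg_at10(qid) for qid in queries) / len(queries)
```

On the real dataset, the full-document texts are much longer than these toy strings, which is exactly why jieba segmentation and a tuned BM25 implementation (as in the gist) are worth the extra dependencies.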
License
The dataset is distributed under the Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA 3.0). You must give appropriate credit and share any derivative works under the same terms.
Citation
If you use this dataset, please also consider citing the original DRCD paper:
@inproceedings{shao2018drcd,
  title={DRCD: a Chinese machine reading comprehension dataset},
  author={Shao, Chih-Chieh and Chang, Chia-Hsuan and others},
  booktitle={Proceedings of the Workshop on Machine Reading for Question Answering},
  year={2018}
}
Acknowledgments
- Original data provided by Delta Research Center.
- This project adapted the format and converted the text from Traditional to Simplified Chinese for IR use cases.