Datasets: mteb

Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas

Dataset Viewer (auto-converted to Parquet)
Schema:

query-id    string, 2–6 characters
corpus-id   string, 2–6 characters
score       float64, constant 1

First rows of the preview (the viewer lists 100 relevant query–document pairs in total, all with score 1; query 124358 alone accounts for 62 of them, matching the maximum number of relevant documents per query in the statistics below):

query-id   corpus-id   score
120122     21561       1
120122     27604       1
120122     66189       1
120122     84763       1
120126     35683       1
46136      114781      1
16709      154706      1
95799      124993      1
124354     82490       1
114225     78428       1
...

End of preview.
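
The qrels can be loaded directly with the datasets library. A minimal sketch, where the repository id "mteb/CQADupstackWordpressRetrieval" is an assumption (substitute this dataset's actual path on the Hub) and the split name follows the "test" split shown in the statistics below:

from datasets import load_dataset

# Assumed repository id, replace with this dataset's actual Hub path.
qrels = load_dataset("mteb/CQADupstackWordpressRetrieval", split="test")

# Each row marks one relevant (query, document) pair with score 1.
df = qrels.to_pandas()
print(df.head())
print(df["query-id"].nunique(), "unique queries,", len(df), "relevant pairs")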

CQADupstackWordpressRetrieval

An MTEB dataset
Massive Text Embedding Benchmark

CQADupStack: A Benchmark Data Set for Community Question-Answering Research

Task category: t2t
Domains: Written, Web, Programming
Reference: http://nlp.cis.unimelb.edu.au/resources/cqadupstack/
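
These fields are also exposed on the task object's metadata. A small sketch (the attribute names follow mteb's TaskMetadata model and should be checked against your installed version):

import mteb

task = mteb.get_task("CQADupstackWordpressRetrieval")
print(task.metadata.category)   # e.g. "t2t"
print(task.metadata.domains)    # e.g. ["Written", "Web", "Programming"]
print(task.metadata.reference)  # the reference URL above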

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Select this task by name; get_tasks returns a list of task objects.
tasks = mteb.get_tasks(tasks=["CQADupstackWordpressRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# YOUR_MODEL is a placeholder for the name of a model registered with mteb.
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
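
evaluator.run returns one result object per task; passing an output folder also writes the scores to disk. A hedged sketch of capturing and printing them (the output_folder argument and the scores attribute follow the current mteb API and may differ between versions):

results = evaluator.run(model, output_folder="results")
for res in results:
    # Task name and per-split metric scores for this run.
    print(res.task_name, res.scores)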

To learn more about how to run models on MTEB tasks, check out the GitHub repository.

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing done as part of the MMTEB contribution.


@inproceedings{hoogeveen2015,
  acmid = {2838934},
  address = {New York, NY, USA},
  articleno = {3},
  author = {Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
  booktitle = {Proceedings of the 20th Australasian Document Computing Symposium (ADCS)},
  doi = {10.1145/2838931.2838934},
  isbn = {978-1-4503-4040-3},
  location = {Parramatta, NSW, Australia},
  numpages = {8},
  pages = {3:1--3:8},
  publisher = {ACM},
  series = {ADCS '15},
  title = {CQADupStack: A Benchmark Data Set for Community Question-Answering Research},
  url = {http://doi.acm.org/10.1145/2838931.2838934},
  year = {2015},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The descriptive statistics for this task are shown below. They can also be obtained programmatically:

import mteb

task = mteb.get_task("CQADupstackWordpressRetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 49146,
        "number_of_characters": 54647154,
        "num_documents": 48605,
        "min_document_length": 65,
        "average_document_length": 1123.7690155333814,
        "max_document_length": 32392,
        "unique_documents": 48605,
        "num_queries": 541,
        "min_query_length": 15,
        "average_query_length": 48.7264325323475,
        "max_query_length": 121,
        "unique_queries": 541,
        "none_queries": 0,
        "num_relevant_docs": 744,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.3752310536044363,
        "max_relevant_docs_per_query": 62,
        "unique_relevant_docs": 744,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
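
As a quick sanity check, the derived figures above follow directly from the raw counts (plain Python, with values copied from the statistics):

# Values copied from the descriptive statistics above.
num_documents = 48605
num_queries = 541
num_relevant_docs = 744

# num_samples is simply documents plus queries.
print(num_documents + num_queries)      # 49146
# average_relevant_docs_per_query
print(num_relevant_docs / num_queries)  # 1.3752310536044363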

This dataset card was automatically generated using MTEB

Downloads last month: 367