Data preview (test split qrels). Columns: query-id (string, 2-5 chars), corpus-id (string, 1-5 chars), score (float64; every previewed value is 1). Each row marks the document corpus-id as relevant to the question query-id, for example:

query-id  corpus-id  score
28994     53865      1
11544     503        1
11544     18547      1
...
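The previewed split can be loaded directly with the Hugging Face datasets library. A minimal sketch; the repository id and config name below are assumptions inferred from this page, not confirmed by it:

from datasets import load_dataset

# Hypothetical repo id and config name: check the actual values on the Hub.
qrels = load_dataset("mteb/cqadupstack-webmasters", "default", split="test")
print(qrels[0])        # e.g. {"query-id": "28994", "corpus-id": "53865", "score": 1.0}
df = qrels.to_pandas() # convert to a pandas DataFrame if preferred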

CQADupstackWebmastersRetrieval

An MTEB dataset
Massive Text Embedding Benchmark

CQADupStack: A Benchmark Data Set for Community Question-Answering Research

Task category: t2t
Domains: Written, Web
Reference: http://nlp.cis.unimelb.edu.au/resources/cqadupstack/

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

tasks = mteb.get_tasks(tasks=["CQADupstackWebmastersRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)  # YOUR_MODEL is a placeholder for a model name string
evaluator.run(model)

To learn more about how to run models on MTEB tasks, check out the GitHub repository.
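A fuller run might look like this (a minimal sketch; the model name and output folder are illustrative choices, not part of this card):

import mteb

tasks = mteb.get_tasks(tasks=["CQADupstackWebmastersRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Illustrative model; any embedding model supported by mteb works here.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
evaluator.run(model, output_folder="results")  # scores are written as JSON under results/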

Citation

If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@inproceedings{hoogeveen2015,
  acmid = {2838934},
  address = {New York, NY, USA},
  articleno = {3},
  author = {Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
  booktitle = {Proceedings of the 20th Australasian Document Computing Symposium (ADCS)},
  doi = {10.1145/2838931.2838934},
  isbn = {978-1-4503-4040-3},
  location = {Parramatta, NSW, Australia},
  numpages = {8},
  pages = {3:1--3:8},
  publisher = {ACM},
  series = {ADCS '15},
  title = {CQADupStack: A Benchmark Data Set for Community Question-Answering Research},
  url = {http://doi.acm.org/10.1145/2838931.2838934},
  year = {2015},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following JSON contains the descriptive statistics of the task. These can also be obtained using:

import mteb

task = mteb.get_task("CQADupstackWebmastersRetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 17911,
        "number_of_characters": 12355347,
        "num_documents": 17405,
        "min_document_length": 49,
        "average_document_length": 708.3635736857225,
        "max_document_length": 24968,
        "unique_documents": 17405,
        "num_queries": 506,
        "min_query_length": 15,
        "average_query_length": 51.93478260869565,
        "max_query_length": 135,
        "unique_queries": 506,
        "none_queries": 0,
        "num_relevant_docs": 1395,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 2.7569169960474307,
        "max_relevant_docs_per_query": 207,
        "unique_relevant_docs": 1395,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
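Since the statistics are a plain nested dictionary, individual fields can be read directly; for example, the average number of relevant documents per query follows from the counts above:

stats = desc_stats["test"]
print(stats["num_queries"])        # 506
print(stats["num_relevant_docs"])  # 1395
print(stats["num_relevant_docs"] / stats["num_queries"])  # ~2.757, matching average_relevant_docs_per_query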

This dataset card was automatically generated using MTEB
