Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas

Dataset preview

query-id | corpus-id | score
-------- | --------- | -----
197555   | 12668     | 1
197555   | 136758    | 1
197555   | 75026     | 1
197555   | 176545    | 1
70175    | 176059    | 1
11542    | 61123     | 1
11542    | 69258     | 1
89372    | 80        | 1
…        | …         | …

Columns: query-id (string, 2–6 characters), corpus-id (string, 1–6 characters), score (float64; every score in the preview is 1).
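These qrels can be loaded with the Datasets library listed above. A minimal sketch, with a placeholder repo id (the exact mteb dataset path is not given on this card) and the split name assumed from the statistics below:

from datasets import load_dataset

# Placeholder repo id: substitute the actual mteb qrels dataset path
qrels = load_dataset("mteb/<dataset-name>", split="test")

# Columns match the preview above: query-id, corpus-id, score
df = qrels.to_pandas()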

CQADupstackTexRetrieval

An MTEB dataset
Massive Text Embedding Benchmark

CQADupStack: A Benchmark Data Set for Community Question-Answering Research

Task category: t2t
Domains: Written, Non-fiction
Reference: http://nlp.cis.unimelb.edu.au/resources/cqadupstack/

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Load the task and wrap it in an evaluator
tasks = mteb.get_tasks(tasks=["CQADupstackTexRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Any embedding model known to mteb works here,
# e.g. "sentence-transformers/all-MiniLM-L6-v2"
model = mteb.get_model("your-model-name")
evaluator.run(model)
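To keep the scores on disk, run also accepts an output folder. A minimal sketch; the folder name here is arbitrary, not part of the card:

# Persist per-task result JSON files under ./results
results = evaluator.run(model, output_folder="results")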

To learn more about how to run models on MTEB tasks, check out the GitHub repository.

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@inproceedings{hoogeveen2015,
  acmid = {2838934},
  address = {New York, NY, USA},
  articleno = {3},
  author = {Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
  booktitle = {Proceedings of the 20th Australasian Document Computing Symposium (ADCS)},
  doi = {10.1145/2838931.2838934},
  isbn = {978-1-4503-4040-3},
  location = {Parramatta, NSW, Australia},
  numpages = {8},
  pages = {3:1--3:8},
  publisher = {ACM},
  series = {ADCS '15},
  title = {CQADupStack: A Benchmark Data Set for Community Question-Answering Research},
  url = {http://doi.acm.org/10.1145/2838931.2838934},
  year = {2015},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The descriptive statistics for this task are shown below. They can also be obtained using:

import mteb

# Fetch the task definition and read its precomputed statistics
task = mteb.get_task("CQADupstackTexRetrieval")
desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 71090,
        "number_of_characters": 88645392,
        "num_documents": 68184,
        "min_document_length": 61,
        "average_document_length": 1298.09043177285,
        "max_document_length": 31204,
        "unique_documents": 68184,
        "num_queries": 2906,
        "min_query_length": 15,
        "average_query_length": 46.935306262904334,
        "max_query_length": 133,
        "unique_queries": 2906,
        "none_queries": 0,
        "num_relevant_docs": 5154,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.7735719201651754,
        "max_relevant_docs_per_query": 146,
        "unique_relevant_docs": 5154,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
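For a quick look at these numbers from Python, the dict retrieved above can be pretty-printed. A minimal sketch building on the previous snippet:

import json

# desc_stats is keyed by split; dump the test-split statistics
print(json.dumps(desc_stats["test"], indent=2))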

This dataset card was automatically generated using MTEB

Downloads last month: 389