Dataset Viewer (auto-converted to Parquet)

The viewer preview shows the relevance judgments (qrels) as three columns: query-id (string, 1-6 characters), corpus-id (string, 2-6 characters), and score (float64, always 1). The first few rows:

query-id   corpus-id   score
52462      49462       1
63024      68660       1
63022      62860       1
10703      10619       1
10703      73282       1

(Preview truncated; the remaining rows follow the same pattern, with every score equal to 1.)
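
To inspect the qrels outside the viewer, they can be loaded with the datasets library. This is a minimal sketch: the repository ID, config name, and split below are assumptions based on the usual layout of mteb retrieval datasets and may need adjusting for this repo.

from datasets import load_dataset

# Assumed repo ID and layout: mteb retrieval datasets typically expose the
# qrels under a "default" config with a "test" split, alongside separate
# "corpus" and "queries" configs.
qrels = load_dataset("mteb/cqadupstack-gis", "default", split="test")
print(qrels[0])  # e.g. {'query-id': '52462', 'corpus-id': '49462', 'score': 1.0}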

CQADupstackGisRetrieval

An MTEB dataset
Massive Text Embedding Benchmark

CQADupStack: A Benchmark Data Set for Community Question-Answering Research

Task category: t2t (text-to-text)
Domains: Written, Non-fiction
Reference: http://nlp.cis.unimelb.edu.au/resources/cqadupstack/

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

task = mteb.get_tasks(tasks=["CQADupstackGisRetrieval"])
evaluator = mteb.MTEB(task)

model = mteb.get_model("your-model-name")  # replace with your model's name or Hugging Face ID
evaluator.run(model)

To learn more about how to run models on MTEB tasks, check out the GitHub repository.
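
As a fuller sketch, the run can write results to an output folder and the scores read back afterwards. The model ID below is purely illustrative, and the final line assumes the layout of mteb's TaskResult objects:

import mteb

tasks = mteb.get_tasks(tasks=["CQADupstackGisRetrieval"])
evaluator = mteb.MTEB(tasks)

model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")  # illustrative model ID
results = evaluator.run(model, output_folder="results")

# Each entry is a TaskResult; per-split scores include metrics such as ndcg_at_10.
print(results[0].scores["test"])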

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@inproceedings{hoogeveen2015,
  acmid = {2838934},
  address = {New York, NY, USA},
  articleno = {3},
  author = {Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
  booktitle = {Proceedings of the 20th Australasian Document Computing Symposium (ADCS)},
  doi = {10.1145/2838931.2838934},
  isbn = {978-1-4503-4040-3},
  location = {Parramatta, NSW, Australia},
  numpages = {8},
  pages = {3:1--3:8},
  publisher = {ACM},
  series = {ADCS '15},
  title = {CQADupStack: A Benchmark Data Set for Community Question-Answering Research},
  url = {http://doi.acm.org/10.1145/2838931.2838934},
  year = {2015},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following JSON contains the descriptive statistics for the task. These can also be obtained using:

import mteb

task = mteb.get_task("CQADupstackGisRetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 38522,
        "number_of_characters": 38178794,
        "num_documents": 37637,
        "min_document_length": 52,
        "average_document_length": 1013.167813587693,
        "max_document_length": 28938,
        "unique_documents": 37637,
        "num_queries": 885,
        "min_query_length": 15,
        "average_query_length": 52.2,
        "max_query_length": 140,
        "unique_queries": 885,
        "none_queries": 0,
        "num_relevant_docs": 1114,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.2587570621468926,
        "max_relevant_docs_per_query": 22,
        "unique_relevant_docs": 1114,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
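
As a quick consistency check, average_relevant_docs_per_query is num_relevant_docs / num_queries = 1114 / 885 ≈ 1.2588, and num_samples is num_documents + num_queries = 37637 + 885 = 38522.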

This dataset card was automatically generated using MTEB
