
CQuAE: A New French Question-Answering Corpus for Teaching Assistants

CQuAE (Contextualised Question-Answering for Education) is a French question-answering dataset in the domain of secondary education. It was designed to support the development of virtual teaching assistants, with a particular focus on creating and answering complex questions that go beyond simple fact extraction. CQuAE includes questions, answers, and the corresponding source documents (excerpts from textbooks or Wikipedia articles). By providing both straightforward and deeper, multi-sentence, or interpretative queries, the dataset supports diverse QA tasks, covering factual, definitional, explanatory, and synthesis question types.

This dataset is described in “CQuAE : Un nouveau corpus de question-réponse pour l’enseignement”
by Thomas Gerald, Louis Tamames, Sofiane Ettayeb, Patrick Paroubek, and Anne Vilnat (JEP/TALN/RECITAL 2024).



Dataset Summary

CQuAE is designed to train and evaluate QA systems capable of handling a range of question types in French. Questions are grounded in educational material from various subject areas—mainly history, geography, and sciences—at the late middle-school and early high-school levels. Each entry comprises:

• A manually written question (French).
• The corresponding source document excerpt(s).
• A manually written answer (in French).
• The question’s type (factual, definition, course-level explanatory, or synthesis).
• Metadata such as a question identifier and document title(s).

One of the key goals behind CQuAE is to collect and evaluate questions that require varying levels of reasoning complexity. While many QA datasets in French emphasize short factual or named-entity answers, CQuAE includes longer, more elaborate responses that often span multiple elements of a text.


Supported Tasks

Question Answering (QA): Given a question and a relevant document, generate or extract an answer.
Complex QA: Some questions require multi-sentence answers, synthesis, or deeper interpretation.
Document Retrieval (RAG): Identify the relevant passages in the larger corpus to answer a question.
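
As a quick start, the corpus can be loaded with the Hugging Face datasets library (a minimal sketch; it assumes the repository ID LsTam/CQuAE and the split names documented under Dataset Structure below):

```python
from datasets import load_dataset

# Load all splits of the corpus from the Hugging Face Hub.
ds = load_dataset("LsTam/CQuAE")

# Use the human-corrected training split and the standard test set.
train = ds["train_v2"]
test = ds["test"]

print(train[0]["question"])  # a question in French
print(train[0]["output"])    # its manually written answer
```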


Dataset Structure

The dataset is organized as follows (feature schema applies to all splits):

train_v1: 10,431 examples.
  • First version of the training data.

train_v2: 7,156 examples.
  • A partially “human-filtered” and corrected version of the training data (some problematic instances from v1 were filtered out or improved).

eval: 512 examples.
  • Evaluation split for model development.

test: 512 examples.
  • Standard test set.

test_top1: 512 examples.
  • The same underlying question set as “test”, except that the single document provided here was retrieved automatically from the full collection via a retrieval-augmented generation (RAG) approach, so it may differ from the original reference document used by annotators.

A high-level representation of the dataset structure, as it would appear when loaded with the Hugging Face datasets library (an illustrative sketch based on the split sizes above; the exact output may differ):
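
```
DatasetDict({
    train_v1:  Dataset({features: ['question', 'title', 'documents', 'type',
                                   'qid', 'documents_title', 'output'], num_rows: 10431})
    train_v2:  Dataset({features: [...same fields...], num_rows: 7156})
    eval:      Dataset({features: [...same fields...], num_rows: 512})
    test:      Dataset({features: [...same fields...], num_rows: 512})
    test_top1: Dataset({features: [...same fields...], num_rows: 512})
})
```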


Data Fields

Each split contains the following fields:

question (string): The question, in French.
title (string): Source title (chapter of the textbook or Wikipedia article).
documents (list): The text excerpts used by the annotator to create the question and its answer.
type (string): The type of question. Possible values include:

  • “factuelle” (factual)
  • “définition” (definition)
  • “cours” (explanatory, course-level)
  • “synthèse” (synthesis)

qid (int): A unique question identifier.
documents_title (string): Title(s) or metadata for the document(s).
output (string): The annotated answer, in French.
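
For illustration, a single record might look like the following sketch (the field names come from the schema above; all values are invented):

```python
# Hypothetical record: field names follow the schema, values are invented.
example = {
    "qid": 42,
    "type": "définition",
    "question": "Qu'est-ce que la photosynthèse ?",
    "title": "SVT - La photosynthèse",
    "documents": ["La photosynthèse est le processus par lequel les plantes ..."],
    "documents_title": "La photosynthèse",
    "output": "La photosynthèse est le processus par lequel les plantes produisent "
              "de la matière organique à partir de lumière, d'eau et de dioxyde de carbone.",
}
print(example["question"], "->", example["output"])
```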

Versions Summary

train_v1: Original stage of the dataset with over 10k QA pairs.
train_v2: A refined set of ~7k QA pairs produced after a thorough human review and correction phase (e.g., addressing syntax, relevance, completeness).
eval, test: Held-out sets of 512 QA items each, created from the corrected dataset (v2).
test_top1: Mirrors “test,” but includes automatically retrieved passages (via RAG) as opposed to the original documents used during annotation.


Source Data and Construction

CQuAE is composed of short extracts from textbooks (e.g., from lelivrescolaire.fr) and filtered Wikipedia articles chosen to match middle- and high-school curricula in fields such as:

• History
• Geography
• Sciences de la Vie et de la Terre (Biology/Earth Sciences)
• Éducation Civique (civic education)

Wikipedia articles were split into smaller parts (up to three paragraphs) for manageability. In total, thousands of texts were collected, though not all were annotated. Two groups of annotators contributed:

Group A: ~20 annotators (non-teachers).
Group B: 6 annotators with teaching experience.

Each annotator was asked to produce:

  1. A question grounded in the document.
  2. The type of the question (factual, definition, course, synthesis).
  3. The document snippet justifying the question.
  4. Evidence for the answer (the relevant phrases in the text).
  5. A written answer in French.

Annotation Process and Types of Questions

Questions were created to vary in difficulty:

  1. Factuelle (Factual): Straightforward facts (e.g., event, date, person, location).
  2. Définition (Definition): Explaining a term or concept.
  3. Cours (Course-level): More detailed or explanatory answers derived from the text.
  4. Synthèse (Synthesis): Answers that require reasoned aggregation or interpretation of multiple text elements.
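
Because the type field encodes this taxonomy, each difficulty level can be isolated with a simple filter (a minimal sketch using the datasets API and the split names documented above):

```python
from datasets import load_dataset

test = load_dataset("LsTam/CQuAE", split="test")

# Keep only the synthesis questions, the category requiring the most reasoning.
synthesis = test.filter(lambda ex: ex["type"] == "synthèse")
print(f"{len(synthesis)} synthesis questions out of {len(test)}")
```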

A manual correction phase was then carried out to improve the quality of the initial annotations. Approximately 8,000–10,000 items were rechecked to address issues such as syntax, missing context, or irrelevance. As a result, train_v2 is smaller than train_v1 (7,156 vs. 10,431 examples) but generally of higher quality.


Applications and Examples

CQuAE can be employed for:

Training QA Systems: Evaluate model performance on fact-based vs. complex (explanatory, synthesis) queries.
Retrieval-Augmented Generation (RAG): The test_top1 split evaluates QA when the supporting passage is retrieved automatically from the full collection rather than taken from the annotators’ reference documents.
Multilingual or Cross-lingual Adaptation: Although the dataset is in French, it can serve as a testbed for domain adaptation in educational contexts.
Automatic Question and Answer Generation: Evaluate how models produce realistic and pedagogically viable Q&A pairs.
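
For instance, the two test variants can be aligned to measure how often automatic retrieval returns a different passage than the annotators used (a sketch; it assumes qid values are shared between test and test_top1):

```python
from datasets import load_dataset

gold = load_dataset("LsTam/CQuAE", split="test")
retrieved = load_dataset("LsTam/CQuAE", split="test_top1")

# Index the gold split by question id (assumed shared across both variants).
gold_by_qid = {ex["qid"]: ex for ex in gold}

mismatches = sum(
    1
    for ex in retrieved
    if ex["qid"] in gold_by_qid
    and ex["documents"] != gold_by_qid[ex["qid"]]["documents"]
)
print(f"{mismatches}/{len(retrieved)} questions received a different document from retrieval")
```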


License

Creative Commons Attribution-NonCommercial 4.0 International

Citation

CQuAE : Un nouveau corpus de question-réponse pour l’enseignement (Gerald et al., JEP/TALN/RECITAL 2024)

If you use or reference CQuAE, please cite:

```bibtex
@inproceedings{gerald-etal-2024-cquae,
    title = "{CQ}u{AE} : Un nouveau corpus de question-r{\'e}ponse pour l'enseignement",
    author = "Gerald, Thomas and Tamames, Louis and Ettayeb, Sofiane and Paroubek, Patrick and Vilnat, Anne",
    year = "2024",
    publisher = "ATALA and AFPC",
    url = "https://aclanthology.org/2024.jeptalnrecital-taln.4/",
    language = "fra",
}
```
