---
annotations_creators:
  - no-annotation
language:
  - en
license: cc-by-sa-4.0
multilinguality:
  - monolingual
source_datasets:
  - original
task_categories:
  - text-classification
pretty_name: wikisqe_experiment
configs:
  - config_name: citation
    data_files:
      - split: train
        path: citation/train*
      - split: val
        path: citation/val*
      - split: test
        path: citation/test*
  - config_name: information addition
    data_files:
      - split: train
        path: information addition/train*
      - split: val
        path: information addition/val*
      - split: test
        path: information addition/test*
  - config_name: syntactic or semantic revision
    data_files:
      - split: train
        path: syntactic or semantic revision/train*
      - split: val
        path: syntactic or semantic revision/val*
      - split: test
        path: syntactic or semantic revision/test*
  - config_name: sac
    data_files:
      - split: train
        path: sac/train*
      - split: val
        path: sac/val*
      - split: test
        path: sac/test*
  - config_name: other
    data_files:
      - split: train
        path: other/train*
      - split: val
        path: other/val*
      - split: test
        path: other/test*
  - config_name: all
    data_files:
      - split: train
        path: all/train*
      - split: val
        path: all/val*
      - split: test
        path: all/test*
  - config_name: disputed claim
    data_files:
      - split: train
        path: disputed claim/train*
      - split: val
        path: disputed claim/val*
      - split: test
        path: disputed claim/test*
  - config_name: disambiguation needed
    data_files:
      - split: train
        path: disambiguation needed/train*
      - split: val
        path: disambiguation needed/val*
      - split: test
        path: disambiguation needed/test*
  - config_name: dubious
    data_files:
      - split: train
        path: dubious/train*
      - split: val
        path: dubious/val*
      - split: test
        path: dubious/test*
  - config_name: unreliable source
    data_files:
      - split: train
        path: unreliable source/train*
      - split: val
        path: unreliable source/val*
      - split: test
        path: unreliable source/test*
  - config_name: when
    data_files:
      - split: train
        path: when/train*
      - split: val
        path: when/val*
      - split: test
        path: when/test*
  - config_name: neutrality disputed
    data_files:
      - split: train
        path: neutrality disputed/train*
      - split: val
        path: neutrality disputed/val*
      - split: test
        path: neutrality disputed/test*
  - config_name: verification needed
    data_files:
      - split: train
        path: verification needed/train*
      - split: val
        path: verification needed/val*
      - split: test
        path: verification needed/test*
  - config_name: dead link
    data_files:
      - split: train
        path: dead link/train*
      - split: val
        path: dead link/val*
      - split: test
        path: dead link/test*
  - config_name: not in citation given
    data_files:
      - split: train
        path: not in citation given/train*
      - split: val
        path: not in citation given/val*
      - split: test
        path: not in citation given/test*
  - config_name: needs update
    data_files:
      - split: train
        path: needs update/train*
      - split: val
        path: needs update/val*
      - split: test
        path: needs update/test*
  - config_name: according to whom
    data_files:
      - split: train
        path: according to whom/train*
      - split: val
        path: according to whom/val*
      - split: test
        path: according to whom/test*
  - config_name: original research
    data_files:
      - split: train
        path: original research/train*
      - split: val
        path: original research/val*
      - split: test
        path: original research/test*
  - config_name: pronunciation
    data_files:
      - split: train
        path: pronunciation/train*
      - split: val
        path: pronunciation/val*
      - split: test
        path: pronunciation/test*
  - config_name: by whom
    data_files:
      - split: train
        path: by whom/train*
      - split: val
        path: by whom/val*
      - split: test
        path: by whom/test*
  - config_name: vague
    data_files:
      - split: train
        path: vague/train*
      - split: val
        path: vague/val*
      - split: test
        path: vague/test*
  - config_name: citation needed
    data_files:
      - split: train
        path: citation needed/train*
      - split: val
        path: citation needed/val*
      - split: test
        path: citation needed/test*
  - config_name: who
    data_files:
      - split: train
        path: who/train*
      - split: val
        path: who/val*
      - split: test
        path: who/test*
  - config_name: attribution needed
    data_files:
      - split: train
        path: attribution needed/train*
      - split: val
        path: attribution needed/val*
      - split: test
        path: attribution needed/test*
  - config_name: sic
    data_files:
      - split: train
        path: sic/train*
      - split: val
        path: sic/val*
      - split: test
        path: sic/test*
  - config_name: which
    data_files:
      - split: train
        path: which/train*
      - split: val
        path: which/val*
      - split: test
        path: which/test*
  - config_name: clarification needed
    data_files:
      - split: train
        path: clarification needed/train*
      - split: val
        path: clarification needed/val*
      - split: test
        path: clarification needed/test*
size_categories:
  - 1M<n<10M
---

# Dataset Card for WikiSQE_experiment

## Dataset Description

### Dataset Summary

WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia by Kenichiro Ando, Satoshi Sekine and Mamoru Komachi (AAAI 2024).

WikiSQE is an English-language dataset containing over 3.4M Wikipedia sentences that were marked by Wikipedia editors as poor quality in some aspect. The quality issues are classified into 153 labels. This repository is the experimental split used in our paper: it covers 5 categories and the top 20 most frequent labels. Each subset blends labeled and unlabeled sentences at a 1:1 ratio. The full dataset is available at https://huggingface.co/datasets/ando55/WikiSQE.
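A minimal loading sketch, assuming the `datasets` library is installed. The helper name `load_split` and the `streaming=True` choice are illustrative, not part of this repository:

```python
try:
    # Third-party dependency: pip install datasets
    from datasets import load_dataset
    HAVE_DATASETS = True
except ImportError:
    HAVE_DATASETS = False

REPO_ID = "ando55/WikiSQE_experiment"

def load_split(config: str = "citation", split: str = "train"):
    """Load one config/split of this repository from the Hub.

    streaming=True iterates over the parquet shards without
    downloading everything up front.
    """
    if not HAVE_DATASETS:
        raise RuntimeError("the `datasets` library is required")
    return load_dataset(REPO_ID, config, split=split, streaming=True)
```

Any of the config names listed in the YAML header (e.g. `citation`, `disputed claim`, `all`) can be passed as `config`, with splits `train`, `val`, and `test`.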

Categories: ['all', 'citation', 'disputed claim', 'information addition', 'other', 'sac', 'syntactic or semantic revision']

Labels: ['according to whom', 'attribution needed', 'by whom', 'citation needed', 'clarification needed', 'dead link', 'disambiguation needed', 'dubious', 'needs update', 'neutrality disputed', 'not in citation given', 'original research', 'pronunciation', 'sic', 'unreliable source', 'vague', 'verification needed', 'when', 'which', 'who']
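Each category and label above is a config name on the Hub, backed by a directory of parquet shards. A small sketch of the `data_files` convention declared in the YAML header (names taken verbatim from the lists above):

```python
# Categories as listed above; each is also a config name on the Hub.
CATEGORIES = ["all", "citation", "disputed claim", "information addition",
              "other", "sac", "syntactic or semantic revision"]
SPLITS = ("train", "val", "test")

def data_files_for(config: str) -> dict:
    """Reproduce the per-config glob pattern from the YAML header,
    e.g. 'citation' -> {'train': 'citation/train*', ...}."""
    return {split: f"{config}/{split}*" for split in SPLITS}
```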

### Data Fields

  • `text`: a string feature
  • `label`: a ClassLabel feature (1: labeled sentence, 0: unlabeled sentence)
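A toy illustration of the two fields using pandas (one of the libraries supported for this dataset). The sentences below are invented placeholders, not real rows:

```python
import pandas as pd

# Invented example rows: label 1 = sentence marked by editors as having a
# quality issue, label 0 = unlabeled sentence.
df = pd.DataFrame({
    "text": [
        "The population was estimated at 10,000.",
        "Some say this was the best decision ever made.",
        "The bridge opened in 1923.",
        "He is widely considered a genius.",
    ],
    "label": [0, 1, 0, 1],
})

# Each subset blends labeled and unlabeled sentences 1:1,
# so the labeled fraction should be close to 0.5.
labeled_fraction = df["label"].mean()
```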

### Label Details and Statistics

See https://github.com/ken-ando/WikiSQE.

### Citation Information

```bibtex
@inproceedings{ando-etal-2024-wikisqe,
    title = "WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia",
    author = "Ando, Kenichiro and
      Sekine, Satoshi and
      Komachi, Mamoru",
    booktitle = "Proceedings of the AAAI Conference on Artificial Intelligence",
    volume = "38",
    number = "16",
    pages = "17656--17663",
    year = "2024",
    address = "Vancouver, Canada",
    publisher = "Association for the Advancement of Artificial Intelligence",
}
```