Gholamreza committed
Commit 9b5ace6 · 1 Parent(s): f1c12d6

Update README.md

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -68,14 +68,13 @@ dataset_info:
 ---
 
 # Dataset Card for "pquad"
-## PQuAD
+## PQuAD Description
 
-*THIS IS A NON-OFFICIAL VERSION OF THE DATASET UPLOADED TO HUGGINGFACE BY [Gholamreza Dar](https://huggingface.co/Gholamreza)*
-The original repository for the dataset is https://github.com/AUT-NLP/PQuAD
+**THIS IS A NON-OFFICIAL VERSION OF THE DATASET UPLOADED TO HUGGINGFACE BY [Gholamreza Dar](https://huggingface.co/Gholamreza)**
+*The original repository for the dataset is https://github.com/AUT-NLP/PQuAD*
 
 Original README.md:
 
-
 PQuAD is a crowd-sourced reading comprehension dataset on Persian Language. It includes 80,000
 questions along with their answers, with 25% of the questions being unanswerable. As a reading
 comprehension dataset, it requires a system to read a passage and then answer the given questions
@@ -83,6 +82,7 @@ from the passage. PQuAD's questions are based on Persian Wikipedia articles and
 variety of subjects. Articles used for question generation are quality checked and include few
 number of non-Persian words.
 
+## Dataset Splits
 The dataset is divided into three categories including train, validation, and test sets and the
 statistics of these sets are as follows:
 
@@ -108,6 +108,7 @@ PQuAD is stored in the JSON format and consists of passages where each passage i
 set of questions. Answer(s) of the questions is specified with answer's span (start and end
 point of answer in paragraph). Also, the unanswerable questions are marked as unanswerable.
 
+## Results
 The estimated human performance on the test set is 88.3% for F1 and 80.3% for EM. We have
 evaluated PQuAD using two pre-trained transformer-based language models, namely ParsBERT
 (Farahani et al., 2021) and XLM-RoBERTa (Conneau et al., 2020), as well as BiDAF (Levy et
@@ -124,6 +125,7 @@ al., 2017) which is an attention-based model proposed for MRC.
 +-------------+------+------+-----------+-----------+-------------+
 ```
 
+## LICENSE
 PQuAD is developed by Mabna Intelligent Computing at Amirkabir Science and Technology Park with
 collaboration of the NLP lab of the Amirkabir University of Technology and is supported by the
 Vice Presidency for Scientific and Technology. By releasing this dataset, we aim to ease research
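
The card text in this diff describes PQuAD's SQuAD-style layout: passages with attached questions, answers given as character spans within the paragraph, and unanswerable questions explicitly flagged. As a quick orientation, here is a minimal sketch of loading this non-official Hugging Face copy and checking one answer span; the repo id `Gholamreza/pquad`, the split name, and the SQuAD-style field names (`context`, `question`, `answers` with `text`/`answer_start`) are assumptions, not guaranteed by this commit.

```python
# Minimal sketch: load the (non-official) PQuAD copy and inspect one example.
# Assumes the repo id "Gholamreza/pquad" and SQuAD-style fields
# ("context", "question", "answers" with "text"/"answer_start");
# adjust if the actual schema differs.
from datasets import load_dataset

ds = load_dataset("Gholamreza/pquad", split="validation")

sample = ds[0]
context = sample["context"]
answers = sample["answers"]

if len(answers["text"]) == 0:
    # Unanswerable questions carry an empty answer list in SQuAD-v2-style data.
    print("Q:", sample["question"], "-> unanswerable")
else:
    start = answers["answer_start"][0]
    text = answers["text"][0]
    # The span (start, start + len(text)) should reproduce the answer string.
    assert context[start:start + len(text)] == text
    print("Q:", sample["question"], "-> A:", text)
```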