SrikrishnaIyer committed · verified
Commit bf08013 · Parent(s): 4d578a3

Update README.md

Files changed (1): README.md (+79, -3)
---
license: mit
---
# Dataset Preprocessing for 10M and 100M Text-Only Tracks

## Overview

This document describes the preprocessing steps applied to the datasets used for the 10M and 100M text-only tracks. The datasets are a mixture of 10 different corpora, as shown in Table 1 below.

## Table 1: Dataset Contents

| Dataset | Domain | # Words (STRICT-SMALL, 10M) | # Words (STRICT, 100M) | Proportion |
|---------|--------|-----------------------------|------------------------|------------|
| CHILDES (MacWhinney, 2000) | Child-directed speech | 0.44M | 4.21M | 5% |
| British National Corpus (BNC), dialogue portion | Dialogue | 0.86M | 8.16M | 8% |
| Children's Book Test (Hill et al., 2016) | Children's books | 0.57M | 5.55M | 6% |
| Children's Stories Text Corpus | Children's books | 0.34M | 3.22M | 3% |
| Standardized Project Gutenberg Corpus (Gerlach and Font-Clos, 2018) | Written English | 0.99M | 9.46M | 10% |
| OpenSubtitles (Lison and Tiedemann, 2016) | Movie subtitles | 3.09M | 31.28M | 31% |
| QCRI Educational Domain Corpus (QED; Abdelali et al., 2014) | Educational video subtitles | 1.04M | 10.24M | 11% |
| Wikipedia | Wikipedia (English) | 0.99M | 10.08M | 10% |
| Simple Wikipedia | Wikipedia (Simple English) | 1.52M | 14.66M | 15% |
| Switchboard Dialog Act Corpus (Stolcke et al., 2000) | Dialogue | 0.12M | 1.18M | 1% |
| Total | - | 9.96M | 98.04M | 100% |

*Table 1: The contents of the datasets for the 10M and 100M tracks; table taken from Warstadt et al. (2023).*

## Preprocessing Steps

The preprocessing steps are the same as those applied by Samuel (2023). Light preprocessing and normalization were applied to these corpora to cast them into a unified format. The following modifications were made (minimal code sketches illustrating these operations follow the list):

1. **CHILDES**:
   - Capitalized the first letter of each line
   - Normalized whitespace around punctuation (detokenization)
   - Put every line between double quotes (as directed speech)

2. **British National Corpus**:
   - Applied capitalization, normalization, and double quotes

3. **Children's Book Test**:
   - Normalized all unnatural symbols and whitespace
   - Replaced Penn Treebank format tokens (e.g., -LRB-, -RRB-) with their corresponding symbols ('(', ')')

4. **Children's Stories Text Corpus**:
   - Preserved formatting with a special [TAB] symbol
   - Applied whitespace normalization

5. **Standardized Project Gutenberg Corpus**:
   - Restored the original paragraphs by removing extra newline symbols
   - Applied whitespace normalization

6. **OpenSubtitles**:
   - Removed leading dash symbols
   - Applied whitespace normalization
   - Cast every line as directed speech with double quotes

7. **QED**:
   - Cleaned up incorrectly parsed HTML symbols using simple heuristics
   - Applied whitespace normalization
   - Cast every line as directed speech with double quotes

8. **Wikipedia**:
   - Cleaned up incorrectly parsed Wikipedia tags and hyperlinks
   - Applied whitespace normalization

9. **Simple Wikipedia**:
   - Applied heuristic HTML clean-up
   - Applied whitespace normalization

10. **Switchboard**:
    - Removed leading dashes
    - Applied whitespace normalization
    - Added double quotes
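
The sketches below illustrate, in Python, the kinds of operations listed above. They are minimal approximations written for this README, not the original pipeline's code (see Samuel, 2023, for that); every function name here is hypothetical, and the exact tokenization conventions are assumptions. First, CHILDES/BNC-style normalization: detokenization, capitalization, and quoting.

```python
import re

def detokenize(line: str) -> str:
    # Remove the extra spaces a tokenizer leaves around punctuation,
    # e.g. "hello , world ." -> "hello, world."
    line = re.sub(r"\s+([.,!?;:)])", r"\1", line)
    line = re.sub(r"\(\s+", "(", line)
    # Reattach clitics such as "do n't" -> "don't" (assumed token scheme)
    line = re.sub(r"\s+(n't|'s|'re|'ll|'ve|'d|'m)\b", r"\1", line)
    return re.sub(r"\s+", " ", line).strip()  # whitespace normalization

def as_directed_speech(line: str) -> str:
    # Capitalize the first letter and wrap the utterance in double quotes.
    line = detokenize(line)
    if line:
        line = line[0].upper() + line[1:]
    return f'"{line}"'

print(as_directed_speech("well , i do n't know ."))  # -> "Well, i don't know."
```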
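
For the Children's Book Test and the Children's Stories corpus, a sketch of restoring Penn Treebank escape tokens and protecting tab-based formatting. The token table below is the conventional PTB bracket set; whether the original heuristics used exactly this mapping is an assumption.

```python
import re

# Conventional Penn Treebank escape tokens and their literal symbols.
PTB_TOKENS = {
    "-LRB-": "(", "-RRB-": ")",
    "-LSB-": "[", "-RSB-": "]",
    "-LCB-": "{", "-RCB-": "}",
    "``": '"', "''": '"',
}

def restore_ptb_symbols(line: str) -> str:
    for token, symbol in PTB_TOKENS.items():
        line = line.replace(token, symbol)
    return re.sub(r"\s+", " ", line).strip()

def protect_tabs(line: str) -> str:
    # Replace literal tabs with a visible [TAB] marker so that whitespace
    # normalization cannot destroy the story's indentation structure.
    line = line.replace("\t", " [TAB] ")
    return re.sub(r" +", " ", line).strip()

print(restore_ptb_symbols("He paused -LRB- briefly -RRB- and said ``hello''"))
# -> He paused ( briefly ) and said "hello"
print(protect_tabs("\tOnce upon a time"))  # -> [TAB] Once upon a time
```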
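
For the Gutenberg corpus, paragraph restoration amounts to undoing hard line wrapping; for OpenSubtitles and Switchboard, leading dashes are stripped before quoting. A sketch under the assumption that blank lines separate paragraphs while single newlines are layout wraps:

```python
import re

def restore_paragraphs(text: str) -> str:
    # Assumption: blank lines separate paragraphs; single newlines are
    # hard wraps inside a paragraph and should become plain spaces.
    paragraphs = re.split(r"\n\s*\n", text)
    return "\n\n".join(
        re.sub(r"\s*\n\s*", " ", p).strip() for p in paragraphs if p.strip()
    )

def clean_subtitle_line(line: str) -> str:
    # Strip the leading dash of a subtitle cue, normalize whitespace,
    # and cast the line as directed speech with double quotes.
    line = re.sub(r"^-+\s*", "", line.strip())
    line = re.sub(r"\s+", " ", line)
    return f'"{line}"'

wrapped = "It was a dark\nand stormy night.\n\nThe rain fell\nin torrents."
print(restore_paragraphs(wrapped))
# -> It was a dark and stormy night.
#
#    The rain fell in torrents.
print(clean_subtitle_line("- Where are you going?"))  # -> "Where are you going?"
```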
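
Finally, the heuristic HTML clean-up used for QED and the two Wikipedia corpora is not specified in detail, so the sketch below shows only a generic shape: unescaping entities, dropping tags, and removing leftover wiki link markup. The specific regexes are assumptions, not the original heuristics.

```python
import html
import re

def strip_html_residue(line: str) -> str:
    line = html.unescape(line)                      # "&amp;" -> "&"
    line = re.sub(r"</?[a-zA-Z][^>]*>", " ", line)  # drop tags like <i>, <br/>
    line = re.sub(r"\[\[|\]\]", "", line)           # unwrap [[wiki links]]
    line = re.sub(r"\{\{[^{}]*\}\}", " ", line)     # drop {{templates}}
    return re.sub(r"\s+", " ", line).strip()        # whitespace normalization

print(strip_html_residue("Mozart &amp; Haydn met in <i>[[Vienna]]</i> in 1781."))
# -> Mozart & Haydn met in Vienna in 1781.
```

In a real pipeline, each per-line helper above would simply be mapped over every line of the corresponding corpus file.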

## References

David Samuel. 2023. Mean BERTs make erratic language teachers: the effectiveness of latent bootstrapping in low-resource settings. *arXiv preprint arXiv:2310.19420*.

Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Gotlieb Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Adina Williams, Bhargavi Paranjape, Tal Linzen, and Ryan Cotterell. 2023. Findings of the 2023 BabyLM Challenge: Sample-efficient pretraining on developmentally plausible corpora. In *Proceedings of the 2023 BabyLM Challenge*. Association for Computational Linguistics (ACL).