dainis-boumber committed
Commit 45bc599 · 1 Parent(s): 97949b0
Files changed (1):
  1. README.md +20 -94
README.md CHANGED
@@ -112,7 +112,7 @@ fraction of texts that are meant to deceive the person reading them one way or a

Each subdirectory/config contains the domain/individual dataset split into three files:

- `train.jsonl`, `test.jsonl`, and `valid.jsonl`
+ `train.jsonl`, `test.jsonl`, and `validation.jsonl`

that contain train, test, and validation sets, respectively.

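For orientation, a minimal sketch of reading one config's three splits with plain Python; the directory name `phishing` is only an example config, the paths assume a local checkout of the repository, and the `text`/`label` field names follow the schema described in this card.

```python
import json
from pathlib import Path

def read_split(config_dir: str, split: str) -> list[dict]:
    """Read one split (train/test/validation) of a config into a list of records."""
    path = Path(config_dir) / f"{split}.jsonl"
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Example config; any other config directory works the same way.
train = read_split("phishing", "train")
validation = read_split("phishing", "validation")
test = read_split("phishing", "test")

print(len(train), len(validation), len(test))
print(train[0]["text"][:80], train[0]["label"])  # label: 1 = deceptive, 0 = truthful
```
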
 
@@ -136,6 +136,15 @@ It is guaranteed to be valid unicode, less than 1 million characters, and contai
`label` answers the question whether text is deceptive: `1` means yes, it is deceptive, `0` means no,
the text is not deceptive (it is truthful).

+ ### Processing and Cleaning
+
+ Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser error) entries, empty entries, duplicate entries,
+ and entries of length less than 2 characters or exceeding 1,000,000 characters were all removed.
+
+ Labels were manually curated and corrected in cases of clear error.
+
+ Whitespace, quotes, bullet points, and unicode are normalized.
+
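The length, duplicate, and normalization rules above can be pictured with a short illustrative sketch; this is not the exact pipeline (the Cleanlab label audit and language filtering are not shown), only the bounds and normalization described in this section.

```python
import unicodedata

def normalize(text: str) -> str:
    """Illustrative normalization: unicode form, curly quotes, and whitespace."""
    text = unicodedata.normalize("NFKC", text)
    text = text.replace("“", '"').replace("”", '"').replace("’", "'")
    return " ".join(text.split())

def keep(text: str) -> bool:
    """Length bounds matching the description above."""
    return 2 <= len(text) <= 1_000_000

def clean(records: list[dict]) -> list[dict]:
    """Drop empty, too-short/too-long, and duplicate texts after normalization."""
    seen, out = set(), []
    for rec in records:
        t = normalize(rec["text"])
        if t and keep(t) and t not in seen:
            seen.add(t)
            out.append({**rec, "text": t})
    return out
```
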
### Layout

The directory layout of gdds is like so:
@@ -229,17 +238,8 @@ Fake News used WELFake as a basis. The WELFake dataset combines 72,134 news arti
(Kaggle, McIntire, Reuters, and BuzzFeed Political). The dataset was cleaned of data leaks in the form of citations of
often reputable sources, such as "[claim] (Reuters)". It contains 35,028 real news articles and 37,106 fake news articles.
We found a number of out-of-domain statements that are clearly not relevant to news, such as "Cool", which is a potential
- problem for transfer learning as well as classification. After cleaning and processing, the Fake News dataset consists of
- 20456 articles; 8832 are deceptive, and 11624 are not.
+ problem for transfer learning as well as classification.
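The "[claim] (Reuters)" citations mentioned above are easy to probe for with a rough, illustrative check; this is not the rule used to build the dataset, and the agency names other than Reuters are placeholders.

```python
import re

# Illustrative pattern for source-attribution leaks such as "... (Reuters)".
LEAK = re.compile(r"\((Reuters|AP|AFP)\)", re.IGNORECASE)

def has_source_leak(text: str) -> bool:
    return bool(LEAK.search(text))

print(has_source_leak("Markets fell on Tuesday. (Reuters)"))  # True
```
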
 
- #### Cleaning
-
- Each dataset has been cleaned using Cleanlab. Non-english entries, erroneous (parser error) entries, empty entries, duplicate entries,
- entries of length less than 2 characters or exceeding 1000000 characters were all removed.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.

#### Data
 
@@ -260,45 +260,19 @@ The original Job Labels dataset had the labels inverted when released. The probl

#### Cleaning

- It was cleaned by removing all HTML tags, empty descriptions, and duplicates. The dataset has been cleaned using Cleanlab.
- Non-english entries, erroneous (parser error) entries, empty entries, duplicate entries, entries of length less
- than 2 characters or exceeding 1000000 characters were all removed.
- The final dataset is heavily imbalanced, with 599 deceptive and 13696 non-deceptive samples out of the 14295 total.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.
+ HTML tags were removed.

#### Data

- The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 14295 samples in the dataset, contained in `job_scams.jsonl`.
- For reproduceability, the data is also split into training, test, and validation sets in 80/10/10 ratio.
- They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`. The sampling process was stratified.
- The training set contains 11436 samples, the validation and the test sets have 1429 and 1430 samles, respectively.
+ **With just under 600 deceptive texts, this dataset is heavily imbalanced.**
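A quick way to see this imbalance (and to sanity-check any split) is to count labels per file; a small sketch, assuming a local checkout with a `job_scams` config directory laid out as described above.

```python
import json
from collections import Counter
from pathlib import Path

def label_counts(config_dir: str) -> dict[str, Counter]:
    """Count `label` values in each split file of one config."""
    counts: dict[str, Counter] = {}
    for split in ("train", "validation", "test"):
        with (Path(config_dir) / f"{split}.jsonl").open(encoding="utf-8") as f:
            counts[split] = Counter(json.loads(line)["label"] for line in f if line.strip())
    return counts

print(label_counts("job_scams"))  # e.g. a Counter({0: ..., 1: ...}) per split
```
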
 
### PHISHING

This dataset consists of various phishing attacks as well as benign emails collected from real users.

- #### Cleaning
-
- Each dataset has been cleaned using Cleanlab. Non-english entries, erroneous (parser error) entries, empty entries,
- duplicate entries, entries of length less than 2 characters or exceeding 1000000 characters were all removed.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.
-
#### Data

- The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 15272 samples in the dataset, contained in `phishing.jsonl`.
- For reproduceability, the data is also split into training, test, and validation sets in 80/10/10 ratio.
- They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`. The sampling process was stratified.
- The training set contains 12217 samples, the validation and the test sets have 1527 and 1528 samles, respectively.
+ The training set contains 12217 samples, the validation and the test sets have 1527 and 1528 samples, respectively.

### POLITICAL STATEMENTS
 
@@ -336,41 +310,19 @@ now 2 out of 6 labels map to non-deceptive and 4 map to deceptive.
The dataset has been cleaned using cleanlab with visual inspection of problems found. Partial sentences, such as "On Iran nuclear deal",
"On inflation", were removed. Text with large number of errors induced by a parser were also removed.
Statements in language other than English (namely, Spanish) were also removed.
- Sequences with unicode errors, containing less than one characters or over 1 million characters were removed.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.

#### Data

- The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 12497 samples in the dataset, contained in `political_statements.jsonl`.
- For reproduceability, the data is also split into training, test, and validation sets in 80/10/10 ratio.
- They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`. The sampling process was stratified.
- The training set contains 9997 samples, the validation and the test sets have 1250 samles each in them.
+ The training set contains 9997 samples, the validation and the test sets have 1250 samples each.
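The released splits are fixed, but the stratified 80/10/10 scheme described above can be reproduced over the raw records along these lines; this is only a sketch, and the random seed is an assumption.

```python
from sklearn.model_selection import train_test_split

def split_80_10_10(texts: list[str], labels: list[int], seed: int = 0):
    """Stratified 80/10/10 split: hold out 20%, then halve it into valid/test."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=seed
    )
    x_valid, x_test, y_valid, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed
    )
    return (x_train, y_train), (x_valid, y_valid), (x_test, y_test)
```
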
 
### PRODUCT REVIEWS

- We post-process and split Product Reviews dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours as they all go into form GDDS-2.0
-
- #### Cleaning
-
- Each dataset has been cleaned using Cleanlab. Non-english entries, erroneous (parser error) entries, empty entries, duplicate entries, entries of length less than 2 characters or exceeding 1000000 characters were all removed.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.
+ We post-process and split the Product Reviews dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours,
+ as they all go to form GDDS-2.0.

#### Data

- The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 20971 samples in the dataset, contained in `product_reviews.jsonl`.
- For reproduceability, the data is also split into training, test, and validation sets in 80/10/10 ratio.
- They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`. The sampling process was stratified.
- The training set contains 16776 samples, the validation and the test sets have 2097 and 2098 samles, respectively.
+ The training set contains 16776 samples, the validation and the test sets have 2097 and 2098 samples, respectively.

### SMS
 
@@ -379,22 +331,9 @@ which contained 5,574 and 5,971 real English SMS messages, respectively. As thes
the final dataset is made up of 6574 texts released by a private UK-based wireless operator; 1274 of them are deceptive,
and the remaining 5300 are not.

- #### Cleaning
-
- Each dataset has been cleaned using Cleanlab. Non-english entries, erroneous (parser error) entries, empty entries,
- duplicate entries, entries of length less than 2 characters or exceeding 1000000 characters were all removed.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.
-
#### Data

- The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 6574 samples in the dataset, contained in `sms.jsonl`. For reproduceability, the data is also split into training,
- test, and validation sets in 80/10/10 ratio. They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`.
- The sampling process was stratified. The training set contains 5259 samples, the validation and the test sets have 657 and 658 samles,
+ The training set contains 5259 samples, the validation and the test sets have 657 and 658 samples,
respectively.

### TWITTER RUMOURS
@@ -406,22 +345,9 @@ https://figshare.com/articles/dataset/PHEME_dataset_of_rumours_and_non-rumours/4
was used in creation of this dataset. We took source tweets only, and ignored replies to them.
We used source tweet's label as being a rumour or non-rumour to label it as deceptive or non-deceptive.

- #### Cleaning
-
- The dataset has been cleaned using cleanlab with visual inspection of problems found. No issues were identified.
- Duplicate entries, entries of length less than 2 characters or exceeding 1000000 characters were removed.
-
- #### Preprocessing
-
- Whitespace, quotes, bulletpoints, unicode is normalized.
-
#### Data

- The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
- There are 5789 samples in the dataset, contained in `tweeter_rumours.jsonl`. For reproduceability, the data is also split into training,
- test, and validation sets in 80/10/10 ratio. They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`.
- The sampling process was stratified. The training set contains 4631 samples, the validation and the test sets have 579 samles each.
+ The training set contains 4631 samples, the validation and the test sets have 579 samples each.

 