---
tags:
- stories
- human
- not-for-all-audiences
- roleplay
pretty_name: Orion
size_categories:
- 10K<n<100K
---
This is a cleaned subset of Nyx's Asstr dataset, sourced from https://asstr.info/asstr-colleciton: a collection of 16K NSFW, NSFL, and SFW stories for completion training.

# Cleaning process
1. Pruning Unnecessary Fields (`1.py`):
 - The initial script `1.py` prunes the JSON records to keep only the fields "id", "title", and "content".
2. Language Filtering (`2.py`):
 - The script `2.py` filters the dataset to keep only records with English content, using the langdetect library.
3. Tokenization and Length Filtering (`3.py`):
 - The script `3.py` uses Hugging Face tokenizers to tokenize the "content" field with a specific model's tokenizer (e.g., "microsoft/phi-4").
 - It filters out records that exceed a predefined maximum token limit (e.g., 16384).
4. Deduplication (`4.py`):
 - The script `4.py` deduplicates the dataset based on the "content" field.
 - It ensures that only one record per unique "content" value is retained.
5. Fuzzy Deduplication (`5.py`):
 - The script `5.py` performs fuzzy deduplication using the rapidfuzz library.
 - It checks for similar "content" values within a certain threshold (e.g., 85% similarity) and removes the duplicates.
6. Content Rating (`6.py`):
 - The script `6.py` sends each "content" to Supernova-Medius for evaluation of its writing quality on a scale of 1-5 (though Medius ended up rating a few stories as 6???).
 - The ratings were cut short because the evals were taking too long (~5 days); I ended up with a 35K rated subset, from which the 16K stories here were extracted. I plan to rate a larger subset in the future.
7. Filtering Based on Rating (`Extract.py`):
 - The script `Extract.py` filters the rated JSON file to retain records that meet specific rating criteria (e.g., 4 to 6).
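To make the early steps concrete, the field pruning (step 1) and exact deduplication (step 4) can be sketched roughly like this. This is a minimal stand-in, not the actual `1.py`/`4.py`; the function names and sample records are hypothetical:

```python
import json

def prune_fields(records):
    """Step 1: keep only the "id", "title", and "content" fields."""
    keep = ("id", "title", "content")
    return [{k: r[k] for k in keep if k in r} for r in records]

def dedupe_exact(records):
    """Step 4: drop any record whose "content" was already seen."""
    seen = set()
    out = []
    for r in records:
        if r["content"] not in seen:
            seen.add(r["content"])
            out.append(r)
    return out

# Hypothetical sample records for illustration only.
records = [
    {"id": 1, "title": "A", "content": "once upon a time", "url": "x"},
    {"id": 2, "title": "B", "content": "once upon a time"},
]
cleaned = dedupe_exact(prune_fields(records))
print(json.dumps(cleaned))
```

The real scripts stream much larger JSON files, but the record-level logic is this simple: a whitelist of keys plus a seen-set keyed on the raw story text.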
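The fuzzy pass (step 5) used the rapidfuzz library; the same idea can be sketched with the standard library's `difflib` as a (much slower) stand-in scorer, using the 85% threshold mentioned above. The sample stories are invented for illustration:

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.85  # analogue of the 85% cutoff in step 5

def is_near_duplicate(a: str, b: str) -> bool:
    # The actual pipeline used rapidfuzz; SequenceMatcher.ratio() is a
    # stdlib stand-in that returns a similarity score in [0.0, 1.0].
    return SequenceMatcher(None, a, b).ratio() >= SIMILARITY_THRESHOLD

def fuzzy_dedupe(contents):
    """Keep each story only if it is not too similar to one already kept."""
    kept = []
    for text in contents:
        if not any(is_near_duplicate(text, k) for k in kept):
            kept.append(text)
    return kept

stories = [
    "The ship sailed at dawn toward the islands.",
    "The ship sailed at dawn towards the islands.",  # near-duplicate
    "A completely different story about a dragon.",
]
deduped = fuzzy_dedupe(stories)
```

Note that pairwise comparison like this is O(n²) in the number of stories, which is part of why rapidfuzz (a fast C++-backed scorer) matters at the 16K+ scale of this dataset.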

---

Further cleaning may be required, but this is the most I can do as of now.

Thank you to Kubernetes_bad for providing the compute to filter this entire thing, to LucyKnada for giving me the motivation to do anything with it, and to Pocketdoc for pointing out the duplicate issues in the OG set.

You can find the deduped massive subset here -> https://huggingface.co/datasets/NewEden/Orion-Unfiltered/

You can find the entire rated subset here -> https://huggingface.co/datasets/NewEden/Orion-Rated

This repo holds the stories extracted from the entire rated subset, as well as the scripts I used.