---
license: mit
task_categories:
  - text-generation
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train_*
      - split: test
        path: data/test_*
---

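The `configs` entry above maps the files under `data/` to the two splits, so both can be loaded directly with the `datasets` library. A minimal sketch, assuming the repository id is `DAMO-NLP-SG/LongCorpus-2.5B` (substitute the actual Hub path if it differs):

```python
from datasets import load_dataset

# Load both splits as declared in the YAML `configs` above.
# Streaming avoids materializing a 2.5B-token corpus in memory.
ds = load_dataset("DAMO-NLP-SG/LongCorpus-2.5B", streaming=True)

# Iterate the training split lazily.
for example in ds["train"]:
    print(example)
    break
```
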
We collect a 2.5B-token training dataset from various domains for long-context continual pre-training. Its composition is as follows (partially inspired by Long-Data-Collection):

| Domain | Proportion | Source |
| --- | --- | --- |
| Book | 40% | Redpajama-Book |
| Arxiv | 20% | Redpajama-Arxiv |
| General | 20% | Redpajama |
| Code | 10% | LCC-Python |
| QA | 5% | Natural Questions |
| Summarization | 5% | BookSum |
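
For concreteness, the proportions above imply the per-domain token budgets sketched below. This is illustrative arithmetic only; the actual sampling procedure is not specified here.

```python
# Per-domain token budgets implied by the table above for a 2.5B-token corpus.
TOTAL_TOKENS = 2_500_000_000
PROPORTIONS = {
    "Book": 0.40,
    "Arxiv": 0.20,
    "General": 0.20,
    "Code": 0.10,
    "QA": 0.05,
    "Summarization": 0.05,
}
budgets = {domain: int(TOTAL_TOKENS * p) for domain, p in PROPORTIONS.items()}
# e.g. budgets["Book"] == 1_000_000_000
```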

We have also curated a 250-million-token test dataset that mirrors this composition. The selection criterion ensures that the average n-gram similarity with the training set (for n = 2, 3, 4) stays below 10%. This threshold effectively excludes all QA and Summarization data, so the test corpus distributes its tokens across the Book, Arxiv, General, and Code categories in a 4:2:2:1 ratio.
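
Below is a minimal sketch of this decontamination check, assuming "similarity" means the fraction of a candidate document's n-grams that also occur in the training set, averaged over n = 2, 3, 4; the exact metric used for this corpus may differ.

```python
def ngrams(tokens, n):
    """All distinct n-grams of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def avg_ngram_similarity(candidate_tokens, train_ngram_sets):
    """Average over n = 2, 3, 4 of the share of candidate n-grams seen in training."""
    scores = []
    for n in (2, 3, 4):
        cand = ngrams(candidate_tokens, n)
        if cand:
            scores.append(len(cand & train_ngram_sets[n]) / len(cand))
    return sum(scores) / len(scores) if scores else 0.0

# Usage: keep a candidate document only if it scores below the 10% threshold.
train_tokens = "the quick brown fox jumps over the lazy dog".split()
train_sets = {n: ngrams(train_tokens, n) for n in (2, 3, 4)}
candidate = "a completely different sentence about arxiv papers".split()
keep = avg_ngram_similarity(candidate, train_sets) < 0.10
```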