---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---

[ClimbMix](https://huggingface.co/datasets/nvidia/ClimbMix) is a high-quality pre-training corpus released by NVIDIA. Here is the description:

> ClimbMix is a compact yet powerful 400-billion-token dataset designed for efficient pre-training that delivers superior performance under an equal token budget. It was introduced in [this paper](https://huggingface.co/papers/2504.13161). We proposed a new algorithm to filter and mix the dataset. First, we grouped the data into 1,000 groups based on topic information. Then we applied two classifiers: one to detect advertisements and another to assess the educational value of the text. Each group was scored accordingly, and low-quality data with low scores was removed. Finally, the remaining high-quality groups were mixed using certain weights to generate the final dataset.

However, the official release consists of GPT-2 token IDs, which are not easy to use directly. We therefore used the GPT-2 tokenizer to detokenize them back into raw text; a sketch of this step is shown in the usage section below.

⚠️ Please note: This version is not officially released or maintained by NVIDIA. We are not responsible for the content, accuracy, or updates of this dataset.

## Citation:

If you find this dataset helpful, please cite the following [paper](https://arxiv.org/abs/2504.13161):

```
@article{diao2025climb,
  author = {Shizhe Diao and Yu Yang and Yonggan Fu and Xin Dong and Dan Su and Markus Kliegl and Zijia Chen and Peter Belcak and Yoshi Suhara and Hongxu Yin and Mostofa Patwary and Celine Lin and Jan Kautz and Pavlo Molchanov},
  title = {CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training},
  journal = {arXiv preprint},
  year = {2025},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2504.13161},
}
```
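## Usage:

For reference, here is a minimal sketch of the detokenization step described above. It assumes the official shards load via `load_dataset` and store GPT-2 token IDs in a `tokens` column; both the loading path and the column name are assumptions, so adjust them to match the actual release layout.

```python
# Minimal sketch: decode GPT-2 token IDs back into raw text.
# Assumptions (verify against the actual release): the shards load
# with `load_dataset` and keep token IDs in a "tokens" column.
from datasets import load_dataset
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def detokenize(example):
    # Decode one document's GPT-2 token IDs into a raw text string.
    example["text"] = tokenizer.decode(example["tokens"])
    return example

# Stream to avoid downloading the full 400B-token corpus at once.
ds = load_dataset("nvidia/ClimbMix", split="train", streaming=True)
ds = ds.map(detokenize)

print(next(iter(ds))["text"][:200])
```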
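Similarly, the filter-and-mix procedure quoted in the description can be pictured roughly as below. This is a toy sketch of the general idea, not the paper's actual implementation: the `group` field, the `ad_clf`/`edu_clf` callables, the threshold, and the mixing weights are all hypothetical placeholders.

```python
# Toy sketch of the filter-and-mix idea: bucket documents by topic
# group, score each group with two classifiers, drop low-scoring
# groups, then sample the survivors with per-group weights.
# All names and thresholds here are hypothetical placeholders.
import random

def filter_and_mix(docs, ad_clf, edu_clf, quality_threshold=0.5, weights=None):
    # 1) Bucket documents by a precomputed topic "group" label.
    groups = {}
    for doc in docs:
        groups.setdefault(doc["group"], []).append(doc)

    # 2) Score each group: reward educational value, penalize ads.
    scores = {
        g: sum(edu_clf(d["text"]) - ad_clf(d["text"]) for d in members) / len(members)
        for g, members in groups.items()
    }

    # 3) Drop groups whose average score falls below the threshold.
    kept = {g: groups[g] for g, s in scores.items() if s >= quality_threshold}

    # 4) Mix surviving groups according to per-group sampling weights.
    weights = weights or {g: 1.0 for g in kept}
    mixed = []
    for g, members in kept.items():
        k = min(int(len(members) * weights.get(g, 1.0)), len(members))
        mixed.extend(random.sample(members, k))
    random.shuffle(mixed)
    return mixed
```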