CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training Paper • 2504.13161 • Published 21 days ago • 88
TxT360: Trillion Extracted Text 📖 Create a large, deduplicated dataset for LLM pre-training
The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters
Scaling FineWeb to 1000+ languages: Step 1: finding signal in 100s of evaluation tasks 📝 Evaluate multilingual models using FineTasks
FineWeb: decanting the web for the finest text data at scale 🍷 Generate high-quality web text data for LLM training
Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler Paper • 2408.13359 • Published Aug 23, 2024 • 25