---
license: mit
pipeline_tag: text-generation
library_name: transformers
language: [
  'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs',
  'cy', 'da', 'de', 'el', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr',
  'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he', 'hi', 'hr', 'ht', 'hu',
  'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
  'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml',
  'mn', 'mr', 'ms', 'my', 'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl',
  'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si', 'sc', 'sd', 'sk', 'sl',
  'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
  'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets:
  # core - base
  - ontocord/fineweb-permissive-multilingual-2m
  - distily/c4_multilingual_1M
  - data-silence/sumnews
  - xu-song/cc100-samples
  - badrex/llm-emoji-dataset
  - fblgit/simple-math
  - Gusarich/math-expressions-1m
  - neuralwork/arxiver
  - christopher/rosetta-code
  - nampdn-ai/tiny-codes
  - JeanKaddour/minipile
  # core - instruct
  - NousResearch/hermes-function-calling-v1
  - simplescaling/s1K-1.1
  # base - instruct
  - mlabonne/open-perfectblend
  - allenai/tulu-3-sft-mixture
  - rombodawg/Everything_Instruct_Multilingual
  # base - reason
  - open-r1/OpenR1-Math-220k
  - open-thoughts/OpenThoughts-114k
  - cognitivecomputations/dolphin-r1
  - simplescaling/s1K-1.1
tags:
  - chat
  - core
  - base
  - instruct
  - reason
---

# tangled-alpha-0.10-core

![logo](./misc/logo.jpg)

Prepare core datasets:

```bash
time python -B prepare_core_datasets.py
```

```
i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=10913927, len(dataset) * block_size=11186775175
Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 11186775175
i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=893465, len(dataset) * block_size=1830709785
Total number of tokens in the optimized dataset '../core-data-1-1025-2049-2049-8000' is 1830709785
i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=375104, len(dataset) * block_size=1536801088
Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 1536801088
i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=177522, len(dataset) * block_size=1454437746
Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 1454437746
i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=77725, len(dataset) * block_size=1273524125
Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 1273524125
i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=22931, len(dataset) * block_size=751425939
Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 751425939
i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=4988, len(dataset) * block_size=326898556
Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 326898556
i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=1137, len(dataset) * block_size=149030001
Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 149030001

42G     ../core-data-0-0-1073741824-1025-16000
6.9G    ../core-data-1-1025-2049-2049-8000
5.8G    ../core-data-2-2049-4097-4097-4000
5.5G    ../core-data-3-4097-8193-8193-2000
4.8G    ../core-data-4-8193-16385-16385-1000
2.9G    ../core-data-5-16385-32769-32769-500
1.3G    ../core-data-6-32769-65537-65537-250
573M    ../core-data-7-65537-131073-131073-125
```
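Each output directory is named `core-data-{i}-{min_len}-{max_len}-{block_size}-{blocks_per_chunk}`; in every row above, `chunk_size` equals `block_size` multiplied by that final component. As an illustration of the length-bucketing scheme the log implies, here is a minimal sketch; the routing logic, helper names, and the placeholder GPT-2 tokenizer are assumptions, not the actual `prepare_core_datasets.py`:

```python
# Hypothetical sketch of length bucketing; NOT the actual pipeline code.
from transformers import AutoTokenizer

# (min_len, max_len, block_size) per bucket, as reported in the log above.
BUCKETS = [
    (0, 1073741824, 1025),   # catch-all bucket, packed into 1025-token blocks
    (1025, 2049, 2049),
    (2049, 4097, 4097),
    (4097, 8193, 8193),
    (8193, 16385, 16385),
    (16385, 32769, 32769),
    (32769, 65537, 65537),
    (65537, 131073, 131073),
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer

def bucket_index(text: str) -> int:
    """Route a document to the narrowest bucket whose [min_len, max_len)
    range covers its token count; bucket 0 serves as the catch-all."""
    n = len(tokenizer(text)["input_ids"])
    for i, (min_len, max_len, _block_size) in enumerate(BUCKETS[1:], start=1):
        if min_len <= n < max_len:
            return i
    return 0
```

Each bucket is then written out in chunks of `block_size * blocks_per_chunk` tokens, which matches the `chunk_size` values in the log.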
Pretrain core model:

```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml
```

```
Seed set to 23
Time to instantiate model: 0.21 seconds.
Total parameters: 402,703,104
Verifying settings ...
Measured TFLOPs: 42432.35
Epoch 1 | iter 64 step 1 | loss train: 11.984, val: n/a | iter time: 460.76 ms (step) remaining time: 12 days, 3:41:55
Epoch 1 | iter 128 step 2 | loss train: 11.979, val: n/a | iter time: 402.83 ms (step) remaining time: 9 days, 0:57:24
Epoch 1 | iter 192 step 3 | loss train: 11.983, val: n/a | iter time: 403.46 ms (step) remaining time: 8 days, 0:12:58
Epoch 1 | iter 256 step 4 | loss train: 11.983, val: n/a | iter time: 403.39 ms (step) remaining time: 7 days, 11:52:07
Epoch 1 | iter 320 step 5 | loss train: 11.979, val: n/a | iter time: 403.85 ms (step) remaining time: 7 days, 4:28:33
Epoch 1 | iter 384 step 6 | loss train: 11.978, val: n/a | iter time: 403.93 ms (step) remaining time: 6 days, 23:33:15
Epoch 1 | iter 448 step 7 | loss train: 11.978, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 20:02:28
Epoch 1 | iter 512 step 8 | loss train: 11.973, val: n/a | iter time: 403.80 ms (step) remaining time: 6 days, 17:24:49
Epoch 1 | iter 576 step 9 | loss train: 11.972, val: n/a | iter time: 403.23 ms (step) remaining time: 6 days, 15:21:59
Epoch 1 | iter 640 step 10 | loss train: 11.967, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 13:43:53
# ...
```

Backup `wandb`:

```bash
mv wandb wandb-pretrain-core-0
```

Copy config:

```bash
cp ../config-0.json ../out/pretrain-core-0/final/config.json
```

Chat with model:

```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final
```

Evaluate model:

```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final'
```

```
```
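Besides the interactive `litgpt chat` session above, the final checkpoint can also be queried programmatically. A minimal generation sketch using litgpt's Python `LLM` API, assuming the installed litgpt version exposes it and the checkpoint directory loads as-is; the prompt and sampling settings are illustrative:

```python
# Minimal generation sketch via litgpt's Python API; the checkpoint path is
# the pretraining output from above, everything else is illustrative.
from litgpt import LLM

llm = LLM.load("../out/pretrain-core-0/final")
text = llm.generate("Rosetta Code is", max_new_tokens=64, top_k=50)
print(text)
```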