---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 5836120400.0
      num_examples: 1423444
  download_size: 2517314986
  dataset_size: 5836120400.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Pre-tokenized dataset of the first 1 million lines of [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted), tokenized for Gemma-2 using SAELens. This dataset has a context size of 1024 and contains about 1.5B tokens.
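As a sanity check on the stated size, a minimal sketch that derives the approximate token count from the `dataset_info` metadata above (number of training examples times the fixed context size per example):

```python
# Numbers taken from the dataset_info block above.
num_examples = 1_423_444   # train split examples
context_size = 1024        # tokens per pre-tokenized sequence

# Total tokens = examples x tokens-per-example.
total_tokens = num_examples * context_size
print(total_tokens)                      # 1457606656
print(f"{total_tokens / 1e9:.2f}B")      # ~1.46B, i.e. "about 1.5B tokens"
```

Note this is an upper-level estimate from the metadata alone; the exact count depends on how SAELens packed and truncated the source documents.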