---
license: apache-2.0
task_categories:
- text-generation
---

# Prolong_64K_v2_Llama2_Tokenizer

This is the Prolong_64K dataset, tokenized with the [Llama-2-7b-hf tokenizer](https://github.com/microsoft/Samba/blob/main/scripts/prepare_slimpajama.py#L22) for use in Samba-style training.

This dataset was used in the research paper [Rethinking Language Model Scaling under Transferable Hypersphere Optimization](https://huggingface.co/papers/2603.28743). The official training codebase is available at [microsoft/ArchScale](https://github.com/microsoft/ArchScale).

## Download

👉 Download and unzip the dataset from [prolong_64K_v2.zip](https://huggingface.co/datasets/jsun/Prolong_64K_v2_Llama2_Tokenizer/blob/main/prolong_64K_v2.zip):

```bash
wget -c https://huggingface.co/datasets/jsun/Prolong_64K_v2_Llama2_Tokenizer/resolve/main/prolong_64K_v2.zip -O prolong_64K_v2.zip
sudo apt install unzip  # Ubuntu; only needed if `unzip` is not already installed
unzip prolong_64K_v2.zip -d prolong_64K_v2
```

## Usage

Once extracted, the dataset can be loaded with the [PackedDataset](https://github.com/microsoft/Samba/blob/383c016f2fb20ce75eed777761e1a4456c87b2b0/lit_gpt/packed_dataset.py#L33) class from the Samba/ArchScale codebase, as sketched below. Example training scripts that consume this data format are provided in the [ArchScale repository](https://github.com/microsoft/ArchScale).
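A minimal loading sketch follows. It assumes the `lit_gpt` package from a Samba/ArchScale checkout is importable, that the unzipped directory contains `*.bin` shards produced by the `prepare_slimpajama.py`-style preprocessing linked above, and that the shards were packed for a 64K context; the glob pattern and the `block_size` and `n_chunks` values are illustrative assumptions, not values confirmed by this card.

```python
# Sketch: stream training examples from the unzipped packed-dataset shards.
import glob

from torch.utils.data import DataLoader
from lit_gpt.packed_dataset import PackedDataset  # from the Samba/ArchScale repo

# Hypothetical path/pattern: wherever you unzipped prolong_64K_v2.
filenames = sorted(glob.glob("prolong_64K_v2/*.bin"))

dataset = PackedDataset(
    filenames,
    n_chunks=1,            # number of shard chunks buffered in memory at a time
    block_size=65536 + 1,  # assumed 64K context, +1 so inputs/labels can be shifted
    shuffle=True,
    seed=42,
)
loader = DataLoader(dataset, batch_size=1, num_workers=1)

for batch in loader:
    print(batch.shape)  # (1, block_size) tensor of token ids
    break
```

`PackedDataset` is an `IterableDataset` that reads memory-mapped shards, so for multi-GPU training you would additionally pass `num_processes` and `process_rank` so that each rank iterates a disjoint subset of the files.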