SelfLong-Llama3.1-8B-Instruct-1M

Wang, Liang, Nan Yang, Xingxing Zhang, Xiaolong Huang, and Furu Wei. "Bootstrap Your Own Context Length." arXiv preprint arXiv:2412.18860 (2024).

Overview

The SelfLong series of Large Language Models (LLMs) is designed to handle extremely long contexts of up to 1 million tokens. The models come in 1B, 3B, and 8B parameter sizes and are initialized from the Llama-3.2 (1B, 3B) and Llama-3.1 (8B) Instruct models.

Performance (RULER-1M)

The following table reports results on the RULER-1M benchmark. Each number is the RULER score averaged over 13 tasks at the given evaluation context length.

| Model | Support Length | 32k | 64k | 128k | 256k | 512k | 1M |
|-------|----------------|-----|-----|------|------|------|----|
| Llama-3.2-1B-Instruct | 128k | 64.7 | 43.1 | 0.0 | - | - | - |
| Llama-3.2-3B-Instruct | 128k | 77.8 | 70.4 | 0.8 | - | - | - |
| Llama-3.1-8B-Instruct | 128k | **89.8** | **85.4** | <u>78.5</u> | - | - | - |
| gradientai/Llama-3-8B-Instruct-Gradient-1048k | 1M | 81.8 | 78.6 | 77.2 | <u>74.2</u> | <u>70.3</u> | <u>64.3</u> |
| SelfLong-1B-1M | 1M | 61.3 | 56.6 | 54.7 | 46.7 | 40.7 | 31.1 |
| SelfLong-3B-1M | 1M | 80.5 | 78.0 | 75.5 | 68.8 | 58.5 | 38.8 |
| SelfLong-8B-1M | 1M | <u>89.5</u> | <u>84.0</u> | **82.0** | **79.7** | **78.2** | **69.6** |

Note:

  • Bold indicates the best performance.
  • Underline indicates the second-best performance.
  • - indicates that the model does not support the given context length.

Evaluation on RULER-1M Dataset

To evaluate the SelfLong models on the RULER-1M dataset, you can follow these steps:

1. Start the vLLM server (a readiness-check sketch follows the command below):

```bash
PROC_PER_NODE=$(nvidia-smi --list-gpus | wc -l)
# Reduce this number if you have limited GPU memory
MAX_MODEL_LEN=1048576
MODEL_NAME_OR_PATH="self-long/SelfLong-Llama3.1-8B-Instruct-1M"

echo "Starting VLLM server..."
vllm serve "${MODEL_NAME_OR_PATH}" \
   --dtype auto \
   --disable-log-stats --disable-log-requests --disable-custom-all-reduce \
   --enable-chunked-prefill --max-num-batched-tokens 8192 \
   --tensor-parallel-size "${PROC_PER_NODE}" \
   --max-model-len "${MAX_MODEL_LEN}" \
   --gpu-memory-utilization 0.9 \
   --api-key token-123 &
```
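
Before sending requests, you may want to wait until the server has finished loading the model; with a 1M-token context window this can take a while. Below is a minimal readiness-check sketch, assuming the default port 8000 and the `token-123` API key from the command above:

```python
import time

import requests

BASE_URL = "http://localhost:8000/v1"            # default vLLM server address
HEADERS = {"Authorization": "Bearer token-123"}  # matches --api-key above

# Poll the OpenAI-compatible /models endpoint until the server responds.
for _ in range(180):
    try:
        resp = requests.get(f"{BASE_URL}/models", headers=HEADERS, timeout=5)
        if resp.status_code == 200:
            print("Server is ready, serving:", resp.json()["data"][0]["id"])
            break
    except requests.ConnectionError:
        pass
    time.sleep(10)
else:
    raise RuntimeError("vLLM server did not become ready in time")
```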
2. Get completions (a batch-prediction sketch follows the single-example code below):

```python
from openai import OpenAI
from datasets import load_dataset

client = OpenAI(
    base_url="http://localhost:8000/v1",  # Default vLLM server address
    api_key="token-123"
)

ds = load_dataset('self-long/RULER-llama3-1M', 'niah_single_1_4k', split='validation')
prompt = ds[0]['input']

completion = client.completions.create(
    model='self-long/SelfLong-Llama3.1-8B-Instruct-1M',
    prompt=prompt,
    max_tokens=100,
)

print(prompt)
print(completion.choices[0].text)
```
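
To run a whole task rather than a single example, you can loop over the split and dump predictions to a JSONL file for later scoring. This is a sketch only: it assumes each example stores its gold answers in an `outputs` field (field names may vary across RULER data dumps), and the exact file layout expected by RULER's `evaluate.py` may differ.

```python
import json

from datasets import load_dataset
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-123")
MODEL = "self-long/SelfLong-Llama3.1-8B-Instruct-1M"

ds = load_dataset("self-long/RULER-llama3-1M", "niah_single_1_4k", split="validation")

with open("niah_single_1_4k_preds.jsonl", "w") as f:
    for i, example in enumerate(ds):
        completion = client.completions.create(
            model=MODEL,
            prompt=example["input"],
            max_tokens=100,
            temperature=0.0,  # greedy decoding for more reproducible scoring
        )
        record = {
            "index": i,
            "pred": completion.choices[0].text,
            "outputs": example.get("outputs", []),  # gold answers (assumed field name)
        }
        f.write(json.dumps(record) + "\n")
```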
3. For evaluation, please refer to the evaluation script provided in the RULER repository: https://github.com/NVIDIA/RULER/blob/main/scripts/eval/evaluate.py.
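
If you just want a quick sanity check without the full RULER pipeline, a simple substring-match score in the spirit of RULER's string-match metrics can be computed from the JSONL file written in the sketch above (a rough approximation, not the official metric):

```python
import json

def string_match_score(pred: str, gold_answers: list[str]) -> float:
    """Fraction of gold answers that appear verbatim in the prediction."""
    if not gold_answers:
        return 0.0
    hits = sum(answer.lower() in pred.lower() for answer in gold_answers)
    return hits / len(gold_answers)

scores = []
with open("niah_single_1_4k_preds.jsonl") as f:
    for line in f:
        record = json.loads(line)
        scores.append(string_match_score(record["pred"], record["outputs"]))

print(f"Average score: {100 * sum(scores) / len(scores):.1f}")
```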

Note that different vLLM and Torch versions may produce slightly different decoding results.

References

```bibtex
@article{wang2024bootstrap,
  title={Bootstrap Your Own Context Length},
  author={Wang, Liang and Yang, Nan and Zhang, Xingxing and Huang, Xiaolong and Wei, Furu},
  journal={arXiv preprint arXiv:2412.18860},
  year={2024}
}
```