---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
- ehartford/dolphin
---


# Airophin: An NTK-by-Parts RoPE Scaled QLoRA Fine-tune of Llama-2-13b (LoRA weights)


<!-- LoRA Weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-LoRA -->
GPTQ weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-GPTQ
fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-fp16

## Overview

This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There are two training phases:
1.  The model is first trained on a long-context (7000-8192 token) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset (GPT4 split only). This amounts to roughly 110 million tokens. An airoboros-style training prompt was used instead of the dolphin system prompt. Training was done with partial NTK scaling applied (scale factor of 4; a rough sketch of this scaling is given after this list). This phase took ~20 hours.
2.  The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) with the same scaling approach, for 2 epochs. This phase took ~15 hours.
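
The partial NTK ("NTK-by-parts") scaling referenced above can be illustrated with the minimal sketch below. This is not the exact code used for training; the ramp parameters `alpha` and `beta` and the original context length are illustrative assumptions. The idea is to leave fast-rotating (high-frequency) RoPE dimensions untouched while linearly interpolating slow-rotating (low-frequency) dimensions by the scale factor.

```python
import torch

def pntk_inv_freq(dim=128, base=10000.0, scale=4.0,
                  orig_ctx=4096, alpha=1.0, beta=32.0):
    """Partial-NTK (NTK-by-parts) scaled RoPE inverse frequencies (sketch)."""
    # Standard RoPE inverse frequencies, one per pair of head dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    wavelength = 2 * torch.pi / inv_freq
    # Number of full rotations each dimension completes within the original context.
    ratio = orig_ctx / wavelength
    # Ramp: 1 -> keep the original frequency (fast-rotating dims),
    #       0 -> fully interpolate by the scale factor (slow-rotating dims).
    ramp = torch.clamp((ratio - alpha) / (beta - alpha), 0.0, 1.0)
    return inv_freq * ramp + (inv_freq / scale) * (1.0 - ramp)
```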

**This is a QLoRA fine-tune (rank 64)**. 
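
For reference, a QLoRA configuration consistent with the rank-64 setup described above might look like the sketch below. The quantization settings, `lora_alpha`, dropout, and target modules are assumptions for illustration, not the values used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the base model (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", quantization_config=bnb_config
)

# Rank-64 LoRA adapter; other hyperparameters here are assumed.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```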

All training was performed with 1x RTX 6000 Ada.

For the full model card, including how to use PNTK, see either of the two merged models linked above.

**IMPORTANT** There are TWO sets of adapter weights. `adapter_model_base` is to be applied to base llama-2-13b; the result can then be merged with `adapter_model`. These adapters correspond to the first and second training phases respectively. Applying only the second-phase adapter to base llama-2-13b will produce different results from the fully merged airophin model. A rough sketch of the two-stage application is shown below.
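
The following is a minimal sketch of that two-stage application using `transformers` and `peft`. It is not the exact script used to produce the merged models; in particular, the `subfolder` layout of the adapter repository is an assumption and may need adjusting to the actual file layout.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.float16
)

# Phase 1: apply the long-context dolphin adapter to base llama-2-13b and merge it in.
model = PeftModel.from_pretrained(
    base, "bhenrym14/airophin-13b-pntk-16k-LoRA", subfolder="adapter_model_base"
)
model = model.merge_and_unload()

# Phase 2: apply the airoboros adapter on top of the phase-1 weights and merge.
model = PeftModel.from_pretrained(
    model, "bhenrym14/airophin-13b-pntk-16k-LoRA", subfolder="adapter_model"
)
model = model.merge_and_unload()
```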