This model was trained in four stages:

Base model -- 1 GB of semi-structured pretraining data:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/637f3b03932a61b89aefbf5c/hpdbVRrM1yt65-gNtRIfT.png)
- Base pretraining phase 1 (constant LR, text completion -- 20,000 steps, 2/3 of an epoch)
- Base pretraining phase 2 (cosine LR, text completion -- 10,000 steps, 1/3 of an epoch)
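The two base-pretraining phases amount to a piecewise schedule: a constant learning rate for the first stretch, then cosine decay to zero. A minimal sketch (the base LR here is an assumption -- the card does not state the actual value):

```python
import math

def two_phase_lr(step, base_lr=2e-5, phase1_steps=20_000, phase2_steps=10_000):
    """Constant LR for phase 1, then cosine decay to zero over phase 2.
    base_lr is a placeholder; the step counts mirror the card above."""
    if step < phase1_steps:
        return base_lr
    # Fraction of phase 2 completed, clamped to [0, 1].
    t = min(step - phase1_steps, phase2_steps) / phase2_steps
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * t))
```

In practice the same shape can be had from `transformers` scheduler helpers; the function above just makes the two phases explicit.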


LoRA merged into the instruct model -- 100 MB of structured story-instruct data:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/637f3b03932a61b89aefbf5c/V1Jf07k8JdI0_OzIDc7FF.png)
- Story-instruct tune phase 1 (constant LR, ~1,250 steps, 1 epoch)
- Story-instruct tune phase 2 (cosine LR, ~1,250 steps, 1 epoch)
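Merging a LoRA adapter folds the low-rank update into the frozen base weights, so the adapter can be discarded afterward. A minimal sketch of the arithmetic on plain nested lists (shapes and values here are illustrative assumptions, not the card's actual adapter config):

```python
def matmul(a, b):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def merge_lora(w, lora_a, lora_b, alpha, r):
    """Fold the LoRA update into the base weight:
    W' = W + (alpha / r) * B @ A, where B is (out, r) and A is (r, in)."""
    delta = matmul(lora_b, lora_a)
    scale = alpha / r
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]
```

With the `peft` library the same step is typically done via the model's merge utilities; the function above only shows the underlying weight update.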