---
datasets:
- PygmalionAI/PIPPA
- ludis/geepeetee4
- lemonilia/LimaRP
---
## GGUF
gguf quants for ludis/tsukasa-13b-qlora-limarp
## Prompting
https://rentry.org/tsukasa13b - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and can be chained together to form a conversation history.
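below is a minimal sketch of that format, using llama-cpp-python to run one of the gguf quants. the filename, system prompt, and sampling settings here are placeholders; see the rentry link above for the recommended prompts and generation settings.

```python
# minimal sketch: build a metharme-style prompt and run it against a gguf quant.
# the model filename and prompt contents are placeholders, not recommendations.
from llama_cpp import Llama

llm = Llama(model_path="tsukasa-13b-qlora-limarp.Q4_K_M.gguf", n_ctx=4096)  # placeholder filename

prompt = (
    "<|system|>Roleplay as Tsukasa. Stay in character."  # out-of-channel instructions
    "<|user|>Hello, who are you?"                          # user input
    "<|model|>"                                            # ask the model for a response
)

out = llm(prompt, max_tokens=256, stop=["<|user|>", "<|system|>"])
print(out["choices"][0]["text"])

# to continue the conversation, append the generated text plus another
# <|user|> turn and a trailing <|model|> token, then call llm() again.
```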
## Training
base model: mistralai/Mistral-7B-v0.1

[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x nvidia a40 gpu cluster. the a40 gpu cluster has been graciously provided by [Arc Compute](https://www.arccompute.io/).

rank 8 lora tune of mistralai/Mistral-7B-v0.1: first tuned on koishi commit 6e675d1 for one epoch, then on limarp (without the ponyville, lolicit, all the fallen, and eka's portal subsets), version 2023-09-30, for 2 epochs in metharme format.
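as a rough illustration of the lora setup (not the actual axolotl config used for this run), here is a peft-style sketch with the stated base model and rank; every other hyperparameter below is an assumption.

```python
# illustrative only: the training above used axolotl, not this script. only the
# base model and lora rank (8) come from this card; alpha, dropout, and target
# modules are assumed values, not the actual training hyperparameters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_cfg = LoraConfig(
    r=8,                      # rank 8, as stated above
    lora_alpha=16,            # assumed
    lora_dropout=0.05,        # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```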