---
language:
- en
---
LoRA that aims to improve the vividness of generated scenarios.
Will produce NSFW output!!
Basically, a lewded, de-shivered, and hopefully better scene-keeping version of Mistral.
The Mistral preset in SillyTavern (ST) produces replies of varying length that adhere better to the situation. I recommend it over the Roleplay preset, which almost always fills the entire response length and, in my opinion, reads drier.
Extra stopping strings for the Mistral preset: `["[", "### Scenario:", "[ End ]", "#", "User:", "INS", "{{user}}:", "IST"]`
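If you run the model outside ST (for example, straight through the `transformers` library), the same strings can be passed as stop sequences. This is only a rough sketch under the assumption of a recent `transformers` version (4.39+ supports `stop_strings` in `generate`); the model id below is a placeholder for whatever you actually load the adapter onto.

```python
# Rough sketch only: reusing the preset's stop strings with transformers' generate().
# The model id is a placeholder, not a real repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-merged-vivid-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

stop_strings = ["[", "### Scenario:", "[ End ]", "#", "User:", "INS", "{{user}}:", "IST"]

prompt = "Describe the tavern at dusk in vivid detail.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=300,
    stop_strings=stop_strings,  # generation halts when the output ends with any of these
    tokenizer=tokenizer,        # transformers needs the tokenizer to check stop_strings
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```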
Very important! Make the first three message pairs as good as possible, and the ride will become smoother after that.
I would also advise against using asterisks at all. The model will eventually mess them up, and while that is not critical to performance, it can get annoying later.
The adapter was trained in three phases: the first and largest phase used many diverse conversations; the second and third phases focused on lewd content.
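To use the adapter outside SillyTavern, something along these lines should work with `peft`; the base model id and adapter path here are placeholders for illustration, not the actual upload names.

```python
# Rough sketch: attaching the LoRA adapter to the Mistral base with peft.
# Base id and adapter path are placeholders; substitute the real files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base, check the adapter config
adapter_path = "path/to/vivid-lora"    # placeholder for the downloaded adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_path)

# Optionally fold the LoRA weights into the base model for plain inference:
model = model.merge_and_unload()
```

If in doubt about which base to use, the `base_model_name_or_path` field in the adapter's `adapter_config.json` records what it was trained against.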
<img src="https://files.catbox.moe/aji8qj.png">
vivid[email protected]
Please send feedback, logs, your favourite settings, or virtually anything you wish to share.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)