RP-MIX
Collection
RP mix, a.k.a. an attempt at a Russian RP experience
3 items
https://huggingface.co/mradermacher/SAINEMO-reMIX-GGUF
https://huggingface.co/mradermacher/SAINEMO-reMIX-i1-GGUF
These presets suit this model well: https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized/Mistral%20Improved
Temp: 0.7-1.2 (approximate range)
Top-A: 0.1
DRY: 0.8 / 1.75 / 2 / 0
The stock SillyTavern presets, such as simple-1, are also worth trying.
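As a rough sketch, assuming the four DRY values are listed in SillyTavern's UI order (multiplier, base, allowed length, penalty range), the recommended settings could be expressed as a generation-request payload. The parameter names below follow common backend conventions (text-generation-webui / llama.cpp style) and are an assumption, not part of the original card; check your backend's API documentation.

```python
# Hypothetical sampler payload for a local inference backend.
# Field names are assumed; some backends name them differently
# (e.g. llama.cpp uses dry_penalty_last_n for the penalty range).
payload = {
    "temperature": 0.9,       # card suggests roughly 0.7-1.2
    "top_a": 0.1,
    "dry_multiplier": 0.8,    # "DRY: 0.8 / 1.75 / 2 / 0" read as
    "dry_base": 1.75,         # multiplier / base / allowed length / penalty range
    "dry_allowed_length": 2,
    "dry_penalty_range": 0,
}
```

Temperature is set mid-range here; nudge it toward 0.7 for more deterministic output or toward 1.2 for more variety, per the card's suggested span.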
This is a merge of pre-trained language models created using mergekit.
This model was merged using the della_linear merge method, with E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b as the base.
The following models were included in the merge:
E:\Programs\TextGen\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
E:\Programs\TextGen\text-generation-webui\models\elinas_Chronos-Gold-12B-1.0
E:\Programs\TextGen\text-generation-webui\models\Vikhrmodels_Vikhr-Nemo-12B-Instruct-R-21-09-24
The following YAML configuration was used to produce this model:
models:
  - model: E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b
    parameters:
      weight: 0.55 # Main emphasis on the Russian language
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
    parameters:
      weight: 0.2 # RP model; slightly lower weight due to its English focus
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\elinas_Chronos-Gold-12B-1.0
    parameters:
      weight: 0.15 # Second RP model
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\Vikhrmodels_Vikhr-Nemo-12B-Instruct-R-21-09-24
    parameters:
      weight: 0.25 # Russian-language support and balance
      density: 0.4
merge_method: della_linear
base_model: E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b
parameters:
  epsilon: 0.05
  lambda: 1
dtype: float16
tokenizer_source: base