---
base_model:
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- SicariusSicariiStuff/Negative_LLAMA_70B
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- TareksLab/L33R1-BASE-70B
- Sao10K/L3.3-70B-Euryale-v2.3
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
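The result loads like any other Llama-3.3-based checkpoint. Below is a minimal usage sketch with the 🤗 Transformers API; the repo id is a placeholder, and `device_map="auto"` assumes `accelerate` is installed (a 70B model in bfloat16 needs roughly 140 GB of memory across your devices).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/MERGE3"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge is exported as bfloat16 (see out_dtype below)
    device_map="auto",
)

# chat_template is set to llama3, so the tokenizer's chat template applies.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```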
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [TareksLab/L33R1-BASE-70B](https://huggingface.co/TareksLab/L33R1-BASE-70B) as the base.
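In DARE TIES, each model's delta from the base is randomly sparsified and rescaled (DARE, controlled by `density` in the config below), then a sign election keeps only contributions that agree with the majority direction before the weighted sum (TIES, scaled by `weight`). The toy NumPy sketch below illustrates the idea only; it is not mergekit's implementation.

```python
# Toy illustration of DARE TIES on small random matrices (not mergekit's code).
import numpy as np

rng = np.random.default_rng(0)

def dare_sparsify(delta, density, rng):
    """DARE: drop each delta entry with probability (1 - density), rescale survivors."""
    mask = rng.random(delta.shape) < density
    return delta * mask / density  # rescaling keeps the expected delta unchanged

base = rng.normal(size=(4, 4))                        # shared base weights
deltas = [rng.normal(size=(4, 4)) for _ in range(2)]  # fine-tune minus base
weights = [0.20, 0.20]                                # per-model merge weights

sparse = [w * dare_sparsify(d, density=0.5, rng=rng) for w, d in zip(weights, deltas)]

# TIES sign election: keep only contributions agreeing with the majority sign.
stacked = np.stack(sparse)
majority_sign = np.sign(stacked.sum(axis=0))
merged_delta = np.where(np.sign(stacked) == majority_sign, stacked, 0.0).sum(axis=0)

# With normalize: false (as in the config below), the weighted sum is used as-is.
merged = base + merged_delta
print(merged.round(3))
```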
### Models Merged
The following models were included in the merge:
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- SicariusSicariiStuff/Negative_LLAMA_70B
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- Sao10K/L3.3-70B-Euryale-v2.3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
    parameters:
      weight: 0.20
      density: 0.5
  - model: Sao10K/L3.3-70B-Euryale-v2.3
    parameters:
      weight: 0.20
      density: 0.5
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
      density: 0.5
  - model: Sao10K/Llama-3.3-70B-Vulpecula-r1
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L33R1-BASE-70B
    parameters:
      weight: 0.20
      density: 0.5
merge_method: dare_ties
base_model: TareksLab/L33R1-BASE-70B
parameters:
  normalize: false
  int8_mask: true
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: Sao10K/Llama-3.3-70B-Vulpecula-r1
  pad_to_multiple_of: 8
```
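To reproduce the merge, save the YAML above (e.g. as `merge3.yaml`) and run it through mergekit, either with the `mergekit-yaml` CLI or via the Python API. The sketch below assumes mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); paths are placeholders.

```python
# Hedged sketch of running the merge via mergekit's Python API; if the API
# has drifted, `mergekit-yaml merge3.yaml ./MERGE3` is the CLI equivalent.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge3.yaml", "r", encoding="utf-8") as fp:  # the YAML above
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./MERGE3",      # output directory (placeholder)
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # honors the tokenizer section of the config
    ),
)
```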