# merge

This model, DreadPoor/Egglet-2B-LINEAR, is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the Linear merge method.
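A linear merge combines the models parameter-by-parameter as a weighted sum of their tensors. The following is a minimal NumPy sketch of that idea (toy tensors, not the actual mergekit implementation); note that with `normalize: false` and all weights at 1.0, as in the configuration below, the result is a plain sum of the models' parameters rather than an average:

```python
import numpy as np

def linear_merge(tensors, weights, normalize=False):
    """Weighted combination of corresponding parameter tensors.

    With normalize=False the weights are applied as-is; with
    normalize=True they are rescaled to sum to 1, giving a
    weighted average instead of a weighted sum.
    """
    weights = np.asarray(weights, dtype=np.float64)
    if normalize:
        weights = weights / weights.sum()
    stacked = np.stack([np.asarray(t, dtype=np.float64) for t in tensors])
    # Contract the model axis: sum_i weights[i] * tensors[i]
    return np.tensordot(weights, stacked, axes=1)

# Two toy "models", each represented by a single 2-element tensor.
a = np.array([1.0, 2.0])
b = np.array([3.0, 6.0])

merged = linear_merge([a, b], weights=[1.0, 1.0], normalize=False)
# Equal weights without normalization give the element-wise sum: [4.0, 8.0]
```

In a real merge this combination is applied independently to every weight tensor shared across the models.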
### Models Merged

The following models were included in the merge (entries written as `base+adapter` are mergekit's notation for a LoRA adapter applied to a base model before merging):
- SaisExperiments/GemmOwO-2B + kweinmeister/gemma-2-2b-it-dolly-15k
- bunnycore/Gemma-2-2b-TitanFusion
- benjamin/Gemma2-2B-Distilled-Math
- IlyaGusev/gemma-2-2b-it-abliterated + monsterapi/gemma-2-2b-norobots
- Arnic/Gemma-2-2b-it-chat-medicare
- zli12321/prometheus2-2B
- lmassaron/gemma-2-2b-it-grpo-gsm8k
- minchyeom/ThinkerGemma
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/Gemma-2-2b-TitanFusion
    parameters:
      weight: 1.0
  - model: SaisExperiments/GemmOwO-2B+kweinmeister/gemma-2-2b-it-dolly-15k
    parameters:
      weight: 1.0
  - model: lmassaron/gemma-2-2b-it-grpo-gsm8k
    parameters:
      weight: 1.0
  - model: benjamin/Gemma2-2B-Distilled-Math
    parameters:
      weight: 1.0
  - model: zli12321/prometheus2-2B
    parameters:
      weight: 1.0
  - model: Arnic/Gemma-2-2b-it-chat-medicare
    parameters:
      weight: 1.0
  - model: IlyaGusev/gemma-2-2b-it-abliterated+monsterapi/gemma-2-2b-norobots
    parameters:
      weight: 1.0
  - model: minchyeom/ThinkerGemma
    parameters:
      weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
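To reproduce the merge, the configuration above can be passed to mergekit's `mergekit-yaml` command-line entry point (file and output paths here are illustrative):

```shell
# Save the YAML above as egglet-linear.yml, then run mergekit's CLI.
pip install mergekit
# --cuda is optional; omit it to merge on CPU.
mergekit-yaml egglet-linear.yml ./Egglet-2B-LINEAR --cuda
```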