Sorawiz/Qwen2.5-14B-Instinct-Talk

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with Sorawiz/Qwen2.5-14B-Instinct-Chat as the base.
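
DARE TIES first computes each model's task vector (its delta from the base), randomly drops a fraction of each delta's entries and rescales the survivors (DARE), then resolves sign conflicts across models and adds the weighted deltas back onto the base (TIES). A sketch of the DARE step, writing p for the density parameter used in the configurations below:

\[
\tilde{\delta}_i = \frac{m_i \, \delta_i}{p}, \qquad m_i \sim \mathrm{Bernoulli}(p),
\]

so with density: 1 (used by almost every intermediate merge below) nothing is dropped and the method reduces to a weighted TIES merge; only the final merge, with density: 0.50, actually sparsifies the deltas.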

Models Merged

The following models were included in the merge:

- Sorawiz/Qwen2.5-14B-GCC
- Ttimofeyka/Tissint-14B-v1.2-128k-RP
- SicariusSicariiStuff/Impish_QWEN_14B-1M

Configuration

The following YAML configurations were used to produce this model; the documents, separated by ---, build each intermediate merge in turn:
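
For reference, any single document below can be executed with mergekit's CLI (mergekit-yaml config.yml ./output) or its Python API. A minimal sketch of the latter, assuming a current mergekit install and one document saved as config.yml (the path and option values here are illustrative):

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse and validate one YAML document as a merge configuration.
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge and write the weights plus tokenizer to ./output.
run_merge(
    merge_config,
    out_path="./output",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)

The repeated tokenizer: source: union and chat_template: auto lines tell mergekit to build the output tokenizer from the union of the input vocabularies and to carry over a chat template from the input models automatically.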

name: Sorawiz/Qwen2.5-Zaralise-A
merge_method: dare_ties
base_model: aixonlab/Zara-14b-v1.2
models:
  - model: aixonlab/Zara-14b-v1.2
    parameters:
      weight: 0.30
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.60
  - model: Sorawiz/Qwen2.5-14B-GCC
    parameters:
      weight: 0.10
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto
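
Concretely, because density is 1, Zaralise-A is (up to TIES sign resolution and weight normalization) the base plus a weighted sum of task vectors:

\[
\theta \approx \theta_{\text{Zara}} + 0.60\,(\theta_{\text{Tissint}} - \theta_{\text{Zara}}) + 0.10\,(\theta_{\text{GCC}} - \theta_{\text{Zara}}),
\]

where the 0.30 entry for the base itself contributes a zero delta, though it can still dilute the other weights when mergekit normalizes them.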

---

name: Sorawiz/Qwen2.5-Zaralise-B
merge_method: dare_ties
base_model: aixonlab/Zara-14b-v1.2
models:
  - model: aixonlab/Zara-14b-v1.2
    parameters:
      weight: 0.30
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.60
  - model: Sorawiz/Qwen2.5-14B-GCC
    parameters:
      weight: 0.10
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-Zaralise-C
slices:
  - sources:
      - model: aixonlab/Zara-14b-v1.2
        layer_range: [0, 48]
      - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
        layer_range: [0, 48]
merge_method: slerp
base_model: aixonlab/Zara-14b-v1.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float32
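
The slerp documents interpolate the two models tensor by tensor. Each t list is a gradient stretched across the layer stack: self-attention tensors drift from the base (t = 0) toward the second model (t = 1) with depth, MLP tensors follow the opposite curve, and all remaining tensors use a flat t = 0.5. A minimal sketch of spherical linear interpolation on flattened weight tensors (the helper name is illustrative), with the usual fallback to linear interpolation when the vectors are nearly parallel:

import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between flattened tensors a and b at fraction t."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    # Angle between the two weight directions.
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: plain linear interpolation is stable here.
        return (1.0 - t) * a + t * b
    s = np.sin(omega)
    return np.sin((1.0 - t) * omega) / s * a + np.sin(t * omega) / s * b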

---

name: Sorawiz/Qwen2.5-Zaralise-D
slices:
  - sources:
      - model: aixonlab/Zara-14b-v1.2
        layer_range: [0, 48]
      - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
        layer_range: [0, 48]
merge_method: slerp
base_model: aixonlab/Zara-14b-v1.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float32

---

name: Sorawiz/Qwen2.5-ZaraTimpist-Base
models:
  - model: aixonlab/Zara-14b-v1.2
    parameters:
      density: 1.00
      weight: 1.00
  - model: Sorawiz/Qwen2.5-Zaralise-A
    parameters:
      density: 1.00
      weight: 1.00
  - model: Sorawiz/Qwen2.5-Zaralise-B
    parameters:
      density: 1.00
      weight: 1.00
  - model: Sorawiz/Qwen2.5-Zaralise-C
    parameters:
      density: 1.00
      weight: 1.00
  - model: Sorawiz/Qwen2.5-Zaralise-D
    parameters:
      density: 1.00
      weight: 1.00
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      density: 1.00
      weight: 1.00
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      density: 1.00
      weight: 1.00
merge_method: ties
base_model: aixonlab/Zara-14b-v1.2
parameters:
  normalize: true
dtype: bfloat16
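
Unlike the earlier dare_ties steps, this one merges seven inputs with plain ties and normalize: true, so conflicting deltas are resolved by per-parameter sign election before averaging. A simplified sketch of that election (function name illustrative; trimming and per-model weights omitted):

import numpy as np

def ties_combine(deltas: list[np.ndarray]) -> np.ndarray:
    """Elect a majority sign per parameter, then average only the agreeing deltas."""
    stacked = np.stack(deltas)               # shape: (n_models, n_params)
    elected = np.sign(stacked.sum(axis=0))   # sign with the larger total mass
    agree = np.sign(stacked) == elected      # entries consistent with the election
    count = np.maximum(agree.sum(axis=0), 1) # avoid division by zero
    return (stacked * agree).sum(axis=0) / count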

---

name: Sorawiz/Qwen2.5-14B-ZaraTimpist
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-ZaraTimpist-Base
models:
  - model: Sorawiz/Qwen2.5-ZaraTimpist-Base
    parameters:
      weight: 0.30
  - model: aixonlab/Zara-14b-v1.2
    parameters:
      weight: 0.20
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.25
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.25
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-Freyalise-A
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Freya-x1
models:
  - model: Sao10K/14B-Qwen2.5-Freya-x1
    parameters:
      weight: 0.40
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.60
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-Freyalise-B
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Freya-x1
models:
  - model: Sao10K/14B-Qwen2.5-Freya-x1
    parameters:
      weight: 0.40
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.60
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-FreyaTimpist-Base
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Freya-x1
models:
  - model: Sao10K/14B-Qwen2.5-Freya-x1
    parameters:
      weight: 1
  - model: Sorawiz/Qwen2.5-Freyalise-A
    parameters:
      weight: 1
  - model: Sorawiz/Qwen2.5-Freyalise-B
    parameters:
      weight: 1
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 1
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 1
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-14B-FreyaTimpist
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-FreyaTimpist-Base
models:
  - model: Sorawiz/Qwen2.5-FreyaTimpist-Base
    parameters:
      weight: 0.30
  - model: Sao10K/14B-Qwen2.5-Freya-x1
    parameters:
      weight: 0.20
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.25
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.25
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-Kunoulise-A
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
models:
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
    parameters:
      weight: 0.40
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.60
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-Kunoulise-B
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
models:
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
    parameters:
      weight: 0.40
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.60
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-Kunoulise-C
slices:
  - sources:
      - model: Sao10K/14B-Qwen2.5-Kunou-v1
        layer_range: [0, 48]
      - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
        layer_range: [0, 48]
merge_method: slerp
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float32

---

name: Sorawiz/Qwen2.5-Kunoulise-D
slices:
  - sources:
      - model: Sao10K/14B-Qwen2.5-Kunou-v1
        layer_range: [0, 48]
      - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
        layer_range: [0, 48]
merge_method: slerp
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float32

---

name: Sorawiz/Qwen2.5-KunouTimpist-Base
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
models:
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
    parameters:
      weight: 0.2
  - model: Sorawiz/Qwen2.5-Kunoulise-A
    parameters:
      weight: 0.1
  - model: Sorawiz/Qwen2.5-Kunoulise-B
    parameters:
      weight: 0.1
  - model: Sorawiz/Qwen2.5-Kunoulise-C
    parameters:
      weight: 0.1
  - model: Sorawiz/Qwen2.5-Kunoulise-D
    parameters:
      weight: 0.1
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.2
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.2
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-14B-KunouTimpist
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-KunouTimpist-Base
models:
  - model: Sorawiz/Qwen2.5-KunouTimpist-Base
    parameters:
      weight: 0.40
  - model: Sao10K/14B-Qwen2.5-Kunou-v1
    parameters:
      weight: 0.10
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.25
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.25
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

name: Sorawiz/Qwen2.5-14B-Instinct-Chat
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Instinct-RP
models:
  - model: Sorawiz/Qwen2.5-14B-Instinct-RP
    parameters:
      weight: 0.20
  - model: Sorawiz/Qwen2.5-14B-ZaraTimpist
    parameters:
      weight: 0.20
  - model: Sorawiz/Qwen2.5-14B-FreyaTimpist
    parameters:
      weight: 0.20
  - model: Sorawiz/Qwen2.5-14B-KunouTimpist
    parameters:
      weight: 0.20
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.10
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.10
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto

---

merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Instinct-Chat
models:
  - model: Sorawiz/Qwen2.5-14B-Instinct-Chat
    parameters:
      weight: 0.6
  - model: Sorawiz/Qwen2.5-14B-GCC
    parameters:
      weight: 0.2
  - model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
    parameters:
      weight: 0.1
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
    parameters:
      weight: 0.1
parameters:
  density: 0.50
tokenizer:
  source: union
chat_template: auto
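
This final, unnamed document is the merge that produces the released model, Sorawiz/Qwen2.5-14B-Instinct-Talk, and it is the only step where DARE's dropout actually fires: density: 0.50 keeps about half of each delta's entries at random and rescales the survivors by 1/0.5 = 2 so the expected update is unchanged. A minimal sketch of that drop-and-rescale step on a flattened delta tensor (helper name illustrative):

import numpy as np

def dare_drop(delta: np.ndarray, density: float, rng: np.random.Generator) -> np.ndarray:
    """Keep a random `density` fraction of delta entries, rescaled to preserve E[delta]."""
    mask = rng.random(delta.shape) < density
    return delta * mask / density

Each model's delta is sparsified independently like this before the TIES-style weighted combination onto Sorawiz/Qwen2.5-14B-Instinct-Chat.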