merged-model-2.5MathHeavy

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES (TrIm, Elect Sign & Merge) method, with unsloth/DeepSeek-R1-Distill-Qwen-7B as the base.
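
TIES computes task vectors relative to the base checkpoint, trims low-magnitude entries according to `density`, elects a per-parameter sign by weighted majority, and averages only the contributions that agree with the elected sign. The sketch below illustrates that procedure on toy tensors; it is not the mergekit implementation, and `ties_merge` is a hypothetical helper written for this card.

```python
# Illustrative TIES-style merge on toy tensors (not the mergekit implementation).
import torch

def ties_merge(base, finetuned, weights, density=0.7):
    """Trim task vectors, elect a sign per parameter, then average the
    contributions that agree with the elected sign."""
    # 1. Task vectors: each model's delta from the base checkpoint.
    deltas = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))

    # 3. Elect sign: majority sign of the weighted, trimmed deltas per entry.
    elected_sign = torch.sign(sum(w * d for w, d in zip(weights, trimmed)))

    # 4. Disjoint merge: average only the entries matching the elected sign.
    delta_sum = torch.zeros_like(base)
    mask_sum = torch.zeros_like(base)
    for w, d in zip(weights, trimmed):
        agree = (torch.sign(d) == elected_sign) & (d != 0)
        delta_sum += torch.where(agree, w * d, torch.zeros_like(d))
        mask_sum += torch.where(agree, torch.full_like(d, w), torch.zeros_like(d))

    return base + delta_sum / mask_sum.clamp(min=1e-8)

# Toy usage: random tensors standing in for a single weight matrix.
base = torch.randn(4, 4)
models = [base + 0.1 * torch.randn(4, 4), base + 0.1 * torch.randn(4, 4)]
merged = ties_merge(base, models, weights=[0.3, 0.7], density=0.7)
```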

Models Merged

The following models were included in the merge:

- nvidia/AceMath-7B-Instruct
- Qwen/Qwen2.5-Math-7B-Instruct

Configuration

The following YAML configuration was used to produce this model:

# merge_ties.yml

# 1. Overall merge method: TIES (sign-elect sparse task arithmetic)
merge_method: ties                                      

# 2. Base model (all task vectors are computed relative to this checkpoint)
base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B   

# 3. Full models to merge (base first, then others)
models:
  - model: unsloth/DeepSeek-R1-Distill-Qwen-7B       # base has no extra params
  - model: nvidia/AceMath-7B-Instruct
    parameters:
      weight: 0.3
      density: 0.7
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      weight: 0.7
      density: 0.7

# 4. Global merge parameters
parameters:
  normalize: true        # normalize weights across models
  int8_mask: true        # mask small values when using int8 backing

# 5. Data type for merged tensors
dtype: bfloat16
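
This configuration is typically executed with mergekit's `mergekit-yaml` CLI (for example, `mergekit-yaml merge_ties.yml ./merged-output`). The snippet below is a minimal sketch of loading the resulting checkpoint with Hugging Face Transformers, assuming the weights are published under this card's repo id; the prompt is illustrative only.

```python
# Minimal sketch: load the merged model with Transformers.
# The repo id is taken from this card's repository name; adjust if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "CK0607/Tie-Merged-Qwen-7B-2.5MathHeavy"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

prompt = "Find the derivative of f(x) = x^3 + 2x."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```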