AnotherOne-Unslop-Mell-12B

AnotherOne-Unslop-Mell-12B is a 12B-parameter language model merged for roleplay and storytelling. It is a custom merge of two specialized models built with MergeKit, blending dynamic narration with character consistency to enhance interactive storytelling.


✨ Merge Philosophy

This model combines the eagerness, rich vocabulary, and action-oriented narration of UnslopNemo-12B-v4 with the detailed character consistency and emotional realism of MN-12B-Mag-Mell-R1. The goal was to produce a roleplay LLM that maintains engaging prose while retaining stable identities and tone across long interactions.

The resulting model is suitable for character-driven experiences, interactive fiction, and persistent narrative environments where stylistic depth and emotional continuity are essential.


⚙️ Recommended Settings

Prompt format:

  • Format: ChatML
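
In ChatML, each turn is wrapped in `<|im_start|>`/`<|im_end|>` markers tagged with a role. A minimal example (the system prompt and character are placeholders, not part of the model):

```text
<|im_start|>system
You are Mira, a sardonic ship's navigator. Stay in character.<|im_end|>
<|im_start|>user
Mira, what's our heading?<|im_end|>
<|im_start|>assistant
```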

Prediction settings:

  • temperature: 0.69 (usable range: 0.4–1.5)
  • repeat penalty: disabled
  • top_k: 0 (disabled)
  • top_p: disabled
  • min_p: 0.05
  • maxPredictedTokens: unlimited
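With top_k and top_p disabled, min_p does the filtering: it keeps only tokens whose probability is at least min_p times the most likely token's probability, so the cutoff adapts to the model's confidence. A minimal sketch of that step (the probability values are illustrative, not from this model):

```python
def min_p_filter(probs, min_p=0.05):
    """Keep tokens with prob >= min_p * top prob, then renormalize."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# A confident distribution: the strong top token prunes the long tail.
probs = {"the": 0.60, "a": 0.30, "xylo": 0.02, "qq": 0.01}
filtered = min_p_filter(probs, min_p=0.05)
# threshold = 0.05 * 0.60 = 0.03, so "xylo" and "qq" are dropped.
```

When the distribution is flat (low confidence), the threshold drops too, so more candidates survive; this is why min_p pairs well with a moderate temperature like 0.69.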

🧊 Available Quants


🔬 Merge Details

Merge Method

This model was merged using the DARE-TIES method: DARE randomly drops a fraction of each fine-tune's delta parameters and rescales the survivors to preserve the expected update, while TIES-style sign election reduces destructive interference between the donors. Combined with MergeKit's slice configuration, this gives fine-grained, per-layer control over each donor's contribution. The base model for DARE was UnslopNemo-12B-v4.
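Conceptually, DARE operates on the delta between each fine-tune and the base: drop each delta with probability p, rescale survivors by 1/(1-p). A toy sketch of that drop-and-rescale step (not MergeKit's actual implementation, and omitting the TIES sign-election stage):

```python
import random

def dare_deltas(base, tuned, drop_p=0.5, seed=0):
    """Drop each delta with prob drop_p; rescale survivors by 1/(1-drop_p)."""
    rng = random.Random(seed)
    merged = []
    for b, t in zip(base, tuned):
        delta = t - b
        if rng.random() < drop_p:
            merged.append(b)                         # delta dropped entirely
        else:
            merged.append(b + delta / (1 - drop_p))  # survivor rescaled
    return merged

# Toy weights for illustration only.
base  = [0.10, -0.20, 0.30, 0.00]
tuned = [0.15, -0.10, 0.25, 0.40]
merged = dare_deltas(base, tuned, drop_p=0.5)
# With drop_p=0.5, each entry is either the base weight or base + 2*delta.
```

The rescaling keeps the expected value of each merged weight equal to the fine-tuned weight, which is why aggressive dropping tends not to degrade the donor's behavior.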


Models Merged

  • UnslopNemo-12B-v4 (base)
  • MN-12B-Mag-Mell-R1

🧪 Merge Configuration

The following MergeKit YAML configuration was used:

```yaml
dtype: bfloat16
merge_method: dare_ties
base_model: UnslopNemo-12B-v4

parameters:
  use_int8_mask: true
  normalize: false

slices:
  - sources:
      - model: UnslopNemo-12B-v4
        layer_range: [0, 10]
        parameters:
          weight: 0.7
      - model: MN-12B-Mag-Mell-R1
        layer_range: [0, 10]
        parameters:
          weight: 0.3

  - sources:
      - model: UnslopNemo-12B-v4
        layer_range: [10, 20]
        parameters:
          weight: 0.5
      - model: MN-12B-Mag-Mell-R1
        layer_range: [10, 20]
        parameters:
          weight: 0.5

  - sources:
      - model: UnslopNemo-12B-v4
        layer_range: [20, 30]
        parameters:
          weight: 0.35
      - model: MN-12B-Mag-Mell-R1
        layer_range: [20, 30]
        parameters:
          weight: 0.65

  - sources:
      - model: UnslopNemo-12B-v4
        layer_range: [30, 40]
        parameters:
          weight: 0.4
      - model: MN-12B-Mag-Mell-R1
        layer_range: [30, 40]
        parameters:
          weight: 0.6
```
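
As a sanity check, the four slices cover the model's 40 transformer layers contiguously, and within each slice the two donor weights sum to 1.0 (so normalize: false is safe):

```python
# Slice table transcribed from the YAML config above:
# (layer_range, UnslopNemo weight, Mag-Mell weight)
slices = [
    ((0, 10),  0.70, 0.30),
    ((10, 20), 0.50, 0.50),
    ((20, 30), 0.35, 0.65),
    ((30, 40), 0.40, 0.60),
]

expected_start = 0
for (start, end), w_unslop, w_mag in slices:
    assert start == expected_start              # contiguous: no gaps, no overlaps
    assert abs(w_unslop + w_mag - 1.0) < 1e-9   # weights already normalized
    expected_start = end
assert expected_start == 40                     # all 40 layers covered
```

A config like this can then be run with MergeKit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model` (exact flags depend on your MergeKit version).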