DXP-Zero-V1.0-24b-Small-Instruct

So I was browsing for a Mistral finetune, found this base model by ZeroAgency, and oh boy... it was perfect! Here are a few notable improvements I observed.

Pros:

  • Longer output for storytelling and roleplay.
  • Dynamic output length (it scales the response to the prompt: shorter prompts get shorter replies, longer prompts get longer ones).
  • Less repetitive (though this depends on your prompt and sampler settings).
  • Tested up to 49,444 of 65,536 context tokens with no degradation; if anything it tracks the context better, which strongly shapes the output. (The downside is that it picks up patterns from previous turns too quickly and treats them as the new standard.)

Tested genres:

  • Romance/Bromance

Added note: I was testing with my own i1-Q5_K_M quantization. Download the i1-GGUF here.
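If you want to try the GGUF quant locally, here is a minimal sketch using llama-cpp-python; the model path is a placeholder for wherever you saved the downloaded file, and the settings are just reasonable starting points, not the settings I tested with.

```python
# Rough sketch: loading an i1-Q5_K_M GGUF with llama-cpp-python.
# The file name below is a placeholder for your local download.
from llama_cpp import Llama

llm = Llama(
    model_path="DXP-Zero-V1.0-24b-Small-Instruct.i1-Q5_K_M.gguf",  # placeholder path
    n_ctx=65536,       # full context window (reduce if you run out of memory)
    n_gpu_layers=-1,   # offload all layers to GPU if they fit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a slow-burn romance."}],
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```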

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the TIES merge method, with ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf as the base.
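For intuition, here is a minimal, self-contained sketch of the TIES idea for a single weight tensor: trim each fine-tune's delta from the base by its density, elect a per-parameter sign, and sum only the contributions that agree with it. This is an illustration only, not mergekit's actual implementation, and the toy tensors stand in for real model weights.

```python
# Conceptual TIES merge for one weight tensor (illustrative, not mergekit's code).
import numpy as np

def trim(delta, density):
    """Keep only the top `density` fraction of entries by magnitude; zero the rest."""
    k = int(delta.size * (1.0 - density))
    if k <= 0:
        return delta
    threshold = np.sort(np.abs(delta), axis=None)[k - 1]
    return np.where(np.abs(delta) > threshold, delta, 0.0)

def ties_merge(base, finetuned, densities, weights):
    # Task vectors: how each fine-tune differs from the base, trimmed and weighted.
    deltas = [trim(ft - base, d) * w
              for ft, d, w in zip(finetuned, densities, weights)]
    stacked = np.stack(deltas)
    # Elect a sign per parameter from the summed deltas.
    elected_sign = np.sign(stacked.sum(axis=0))
    # Keep only contributions whose sign agrees with the elected sign.
    agree = np.sign(stacked) == elected_sign
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    # With `normalize: false` (as in the config below), the summed delta is applied directly.
    return base + merged_delta

# Toy usage with random tensors standing in for real weights.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
ft_a = base + rng.normal(scale=0.1, size=(4, 4))
ft_b = base + rng.normal(scale=0.1, size=(4, 4))
merged = ties_merge(base, [ft_a, ft_b], densities=[0.7, 0.5], weights=[0.7, 0.5])
print(np.round(merged - base, 3))
```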

Models Merged

The following models were included in the merge:

  • Gryphe/Pantheon-RP-1.8-24b-Small-3.1
  • PocketDoc/Dans-PersonalityEngine-V1.2.0-24b

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      density: 0.7
      weight: 0.7
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.5
      weight: 0.5
      
merge_method: ties
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
tokenizer:
  source: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
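
The config above can be reproduced by feeding it to mergekit's mergekit-yaml CLI. To try the merged model itself, here is a minimal sketch of loading it with transformers, assuming the text-only causal-LM path works for this repo and that you have enough VRAM for bfloat16 weights (otherwise swap in quantization or adjust device_map).

```python
# Sketch: loading the merged model with transformers and generating one reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "h34v7/DXP-Zero-V1.0-24b-Small-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce your character in two short paragraphs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=300, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```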