# L3-Hecate-8B-v1.0


## About

This is a merge of pre-trained language models created using mergekit.

## Recommended Samplers

- Temperature: 1.0
- TFS: 0.85
- Smoothing Factor: 0.3
- Smoothing Curve: 1.1
- Repetition Penalty: 1.1
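As a concrete illustration, here is a minimal sketch of passing these settings to a local OpenAI-compatible completion endpoint (for example, the one exposed by text-generation-webui). The endpoint URL, the prompt, and the availability of the extended sampler fields (`tfs`, `smoothing_factor`, `smoothing_curve`) are assumptions about your serving stack, not guarantees from this card; adapt the field names to whatever backend you use.

```python
import requests

# Hypothetical local endpoint; adjust to your serving stack.
API_URL = "http://127.0.0.1:5000/v1/completions"

payload = {
    "prompt": "Write a short scene introducing Hecate.",
    "max_tokens": 256,
    # Recommended sampler settings from this card:
    "temperature": 1.0,
    "tfs": 0.85,                # Tail-Free Sampling
    "smoothing_factor": 0.3,    # quadratic/smooth sampling
    "smoothing_curve": 1.1,
    "repetition_penalty": 1.1,
}

response = requests.post(API_URL, json=payload, timeout=120)
print(response.json()["choices"][0]["text"])
```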

## Merge Method

This model was produced through a series of model stock and LoRA merges, followed by ExPO (model extrapolation). It uses a mix of smart and roleplay-centered models to improve performance.
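For intuition: the ExPO step in the configuration below is a task-arithmetic merge with a single model at weight 1.25 and `normalize: false`, which extrapolates past the intermediate `hq_rp` merge along the direction away from the base model, i.e. `final = base + 1.25 * (hq_rp - base)`. The following is a minimal per-tensor sketch of that arithmetic for illustration only; it is not mergekit's actual implementation.

```python
import torch

def expo_extrapolate(base: dict, merged: dict, weight: float = 1.25) -> dict:
    """Extrapolate past `merged` along the direction away from `base`.

    Equivalent to task arithmetic with one model and weight > 1:
    result = base + weight * (merged - base).
    """
    return {
        name: base[name] + weight * (merged[name] - base[name])
        for name in base
    }

# Toy example with placeholder tensors standing in for model weights.
base = {"w": torch.zeros(2, 2)}
merged = {"w": torch.ones(2, 2)}
print(expo_extrapolate(base, merged)["w"])  # tensor filled with 1.25
```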

## Configuration

The following YAML configuration was used to produce this model:

```yaml
---
models:
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Jellywibble/lora_120k_pref_data_ep2
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
merge_method: model_stock
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
dtype: float32
vocab_type: bpe
name: hq_rp
---
# ExPO
models:
  - model: hq_rp
    parameters:
      weight: 1.25
merge_method: task_arithmetic
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
dtype: float32
vocab_type: bpe
```
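If you want to run the published merge directly, a minimal transformers sketch is below. The loading arguments (bfloat16, `device_map="auto"`) and the sample prompt are ordinary choices assumed for illustration, not requirements from this card; the merge itself was performed in float32.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/L3-Hecate-8B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3-style chat formatting via the bundled chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```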