NOT FOR USE -- BUG IN RESPONSES

NeuTrixOmniBe-7B-model-remix

NeuTrixOmniBe-7B-model-remix is a merge of the following models using LazyMergekit:

- CultriX/NeuralTrix-7B-dpo
- paulml/OmniBeagleSquaredMBX-v3-7B-v2

🧩 Configuration

slices:
  - sources:
      - model: CultriX/NeuralTrix-7B-dpo
        layer_range: [0, 32]
      - model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/NeuralTrix-7B-dpo
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
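
For intuition, the sketch below shows the math behind the slerp merge_method named in the config: each pair of corresponding weight tensors is interpolated along the arc between them, with t controlling the blend (t = 0 keeps the base model's tensor, t = 1 the other model's). This is an illustrative re-implementation in plain PyTorch, not mergekit's actual code; the function name and the lerp fallback are assumptions, and mergekit additionally applies the per-layer t schedules given by the filter blocks above. To reproduce the merge itself, mergekit's mergekit-yaml CLI (roughly: mergekit-yaml config.yaml ./merged) is the usual route.

import torch

# Illustrative slerp between two weight tensors (not mergekit's implementation).
def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors, treated as points on a hypersphere.
    cos_omega = torch.dot(a / (a.norm() + eps), b / (b.norm() + eps)).clamp(-1.0, 1.0)
    omega = torch.arccos(cos_omega)
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel tensors: fall back to ordinary linear interpolation.
        out = (1.0 - t) * a + t * b
    else:
        out = (torch.sin((1.0 - t) * omega) / sin_omega) * a \
            + (torch.sin(t * omega) / sin_omega) * b
    return out.reshape(v0.shape).to(v0.dtype)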

💻 Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

# Hugging Face repo id of the merged model.
model = "Kukedlc/NeuTrixOmniBe-7B-model-remix"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a prompt in the model's chat format.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model for generation; the merged weights are stored in bfloat16
# but are loaded here as float16, which is fine for inference on most GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens with temperature and nucleus sampling.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 76.30 |
| AI2 Reasoning Challenge (25-shot) | 72.70 |
| HellaSwag (10-shot)               | 89.03 |
| MMLU (5-shot)                     | 64.57 |
| TruthfulQA (0-shot)               | 76.90 |
| Winogrande (5-shot)               | 85.08 |
| GSM8k (5-shot)                    | 69.52 |
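
To sanity-check one of these numbers yourself, the sketch below uses EleutherAI's lm-evaluation-harness (pip install lm-eval), which the Open LLM Leaderboard is built on. The task name arc_challenge and the 25-shot setting match the leaderboard's ARC configuration, but treat the exact harness version and arguments as assumptions; scores may not reproduce the table to the decimal.

import lm_eval

# Evaluate the merged model on ARC (25-shot), approximating the leaderboard setup.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Kukedlc/NeuTrixOmniBe-7B-model-remix,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])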