Model Card for uisikdag/Mixmistral-2x7b-4bit-bitsnbytes

Prepared with mergekit (mixture-of-experts merge of two Mistral-7B-based models; see config.yaml below)

Quantized to 4-bit with bitsandbytes
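
A minimal loading sketch with transformers. It assumes the bitsandbytes 4-bit quantization config was saved with the checkpoint (so `from_pretrained` restores it without an explicit `BitsAndBytesConfig`) and that `bitsandbytes` and `accelerate` are installed; the prompt text is only an example.

```python
# Minimal sketch: load the 4-bit bitsandbytes checkpoint with transformers.
# Assumes the quantization config is stored in the repo and is picked up
# automatically; requires bitsandbytes and accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "uisikdag/Mixmistral-2x7b-4bit-bitsnbytes"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "[INST] You are a helpful assistant. Explain mixture-of-experts briefly. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```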

config.yaml

base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: float16
gate_mode: cheap_embed
experts:
  - source_model: HuggingFaceH4/zephyr-7b-beta
    positive_prompts: ["You are an helpful general-pupose assistant."]
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts: ["You are helpful assistant."]
Model size: 6.77B params (Safetensors; tensor types F32 · U8)
