LLM / VLM Quantization
Prepared with mergekit and quantized to 4-bit with bitsandbytes.
config.yaml

```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: float16
gate_mode: cheap_embed
experts:
  - source_model: HuggingFaceH4/zephyr-7b-beta
    positive_prompts: ["You are a helpful general-purpose assistant."]
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts: ["You are a helpful assistant."]
```