# MaziyarPanahi/Calme-12B-Instruct-v0.1

## Model Description

Calme-12B is a 12-billion-parameter language model, created by merging and then fine-tuning on high-quality datasets on top of Calme-7B-Instruct-v0.9. The Calme models aim to generate text that reads with clarity, calmness, and coherence.

## How to Use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-12B-Instruct-v0.1")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-12B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-12B-Instruct-v0.1")
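The snippets above only load the model. A minimal generation sketch is below; the chat-style prompt format and the sampling settings are illustrative assumptions, not values published for this model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/Calme-12B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map/torch_dtype "auto" lets accelerate place the 12B weights sensibly
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat-style prompt; a chat template is assumed to be defined in the tokenizer config.
messages = [{"role": "user", "content": "Explain model merging in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are generic defaults, not tuned for this model.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```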

## Quantized Models

I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, specifically personal computers without any hardware acceleration. Because of this, I am committed to converting and quantizing every model I fine-tune, to make them accessible to everyone!
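As a rough sketch of how a GGUF build of this model could be run locally with llama.cpp; the quantized repository name and filename below are assumptions, so check the actual quantization repos linked from this card for the exact files:

```shell
# Download one quantized file (repo and file names are assumed; verify before use)
huggingface-cli download MaziyarPanahi/Calme-12B-Instruct-v0.1-GGUF \
  Calme-12B-Instruct-v0.1.Q4_K_M.gguf --local-dir .

# Start an interactive chat with llama.cpp's CLI
./llama-cli -m Calme-12B-Instruct-v0.1.Q4_K_M.gguf -cnv -p "You are a helpful assistant."
```

A Q4_K_M quantization is a common middle ground between file size and quality, but any of the published quantization levels will work the same way.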

Model size: 12.5B parameters (Safetensors, FP16)
