nhe-ai/Llasa-1B-Multilingual-mlx-4Bit
The Model nhe-ai/Llasa-1B-Multilingual-mlx-4Bit was converted to MLX format from HKUSTAudio/Llasa-1B-Multilingual using mlx-lm version 0.22.3.
⚠️ Important: This model was automatically converted for experimentation. The following guide was not designed for this model and may not work as expected; do not expect it to function out of the box. Use at your own risk.
Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nhe-ai/Llasa-1B-Multilingual-mlx-4Bit")

prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Model tree for nhe-ai/Llasa-1B-Multilingual-mlx-4Bit

- Base model: meta-llama/Llama-3.2-1B-Instruct
- Finetuned from: HKUSTAudio/Llasa-1B-Multilingual