---
license: other
license_name: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE
language:
- en
tags:
- multimodal
- mlx
library_name: mlx
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Omni-7B
---
# giangndm/qwen2.5-omni-7b-mlx
This model [giangndm/qwen2.5-omni-7b-mlx](https://huggingface.co/giangndm/qwen2.5-omni-7b-mlx) was
converted to MLX format from [Qwen/Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B)
using mlx-lm version **0.24.0**.
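For reference, stock mlx-lm exposes a `convert` helper that produces this kind of MLX checkpoint. The sketch below shows the generic conversion call; the omni-specific conversion may instead go through the [mlx-lm-omni](https://github.com/giangndm/mlx-lm-omni) fork, and the output path is illustrative:

```python
from mlx_lm import convert

# Download the original Hugging Face weights and write an MLX checkpoint.
# "qwen2.5-omni-7b-mlx" is an illustrative local output directory.
convert(
    "Qwen/Qwen2.5-Omni-7B",
    mlx_path="qwen2.5-omni-7b-mlx",
)
```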
## Use with [mlx-lm-omni](https://github.com/giangndm/mlx-lm-omni)
```bash
uv add mlx-lm-omni
# or install directly from the Git repository
uv add git+https://github.com/giangndm/mlx-lm-omni.git
```
```python
from io import BytesIO
from urllib.request import urlopen

import librosa
from mlx_lm_omni import load, generate

# Load the MLX-converted model and its tokenizer from the Hub
model, tokenizer = load("giangndm/qwen2.5-omni-7b-mlx")

# Fetch a sample audio clip and decode it at the 16 kHz rate the model expects
audio_path = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"
audio = librosa.load(BytesIO(urlopen(audio_path).read()), sr=16000)[0]

# Attach the waveform to the user turn via the "audio" field
messages = [
    {"role": "system", "content": "You are a speech recognition model."},
    {"role": "user", "content": "Transcribe the English audio into text without any punctuation marks.", "audio": audio},
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
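The same `load`/`generate` pair should also handle plain text turns. Assuming the fork mirrors the standard mlx-lm chat flow, omitting the `audio` field gives a text-only prompt; a minimal sketch continuing from the snippet above:

```python
# Text-only chat: same API, just no "audio" key in the message
messages = [
    {"role": "user", "content": "In one sentence, what is MLX?"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```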