---
license: apache-2.0
license_link: https://huggingface.co/Freedman/Qybera2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
- es
- zh
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
datasets:
- facebook/natural_reasoning
new_version: Qybera/Qybera2.6-0.5B-instruct
---
# Qybera2.5-0.5B-Instruct
## Introduction
Qybera2.5 is the latest series of Qybera large language models. For Qybera2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qybera2.5 brings the following improvements upon Qybera2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. It is also **more resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qybera2.5 model**, which has the following features:
- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.5B
- Number of Parameters (Non-Embedding): 0.48B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://Qyberalm.github.io/blog/Qybera2.5/), [GitHub](https://github.com/QyberaLM/Qybera2.5), and [Documentation](https://Qybera.readthedocs.io/en/latest/).
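To cross-check the numbers above against the published checkpoint, you can inspect the model configuration. Below is a minimal sketch using `transformers.AutoConfig`; the attribute names follow the standard causal-LM config fields and are an assumption for this architecture:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qybera/Qybera2.5-0.5B-Instruct")

# Expected values per the spec above (assumed field names)
print(config.num_hidden_layers)        # 24 layers
print(config.num_attention_heads)      # 14 query heads
print(config.num_key_value_heads)      # 2 key/value heads (GQA)
print(config.tie_word_embeddings)      # True (tied word embeddings)
print(config.max_position_embeddings)  # context length
```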
## Requirements
The code for Qybera2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'Qybera2'
```
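If you are unsure which version is installed, a quick check along these lines (a minimal sketch, not part of the Qybera codebase) avoids the error above:
```python
import transformers
from packaging import version  # packaging ships as a transformers dependency

# Qybera2.5 requires the model code shipped with transformers >= 4.37.0
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old; "
        "please upgrade with: pip install -U transformers"
    )
```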
## Quickstart
The following code snippet shows how to load the tokenizer and model with `apply_chat_template` and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qybera/Qybera2.5-0.5B-Instruct"

# Load the model and tokenizer; device_map="auto" places the weights on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Qybera, created by worldaicorp. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt string and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output before decoding
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
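For interactive use, you may prefer to stream tokens to stdout as they are produced. Below is a minimal sketch using the standard `transformers` `TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above; the streaming setup itself is an illustration rather than part of the official Qybera examples:
```python
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)
```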
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://Qyberalm.github.io/blog/Qybera2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://Qybera.readthedocs.io/en/latest/benchmark/speed_benchmark.html).