XinxingYang committed on
Commit e144ca5 · verified · 1 Parent(s): 990a43e

create readme file (#1)


- create readme file (6fc348178d39a11fd7d75c1f342b3e7718465c8a)

Files changed (1): README.md (+121 −0)
README.md ADDED

---
license: mit
language:
- zh
- en
base_model:
- inclusionAI/Ling-lite
pipeline_tag: text-generation
---

# Ring-lite-linear-preview

<p align="center">
    <img src="https://huggingface.co/inclusionAI/Ring-lite-distill-preview/resolve/main/ant-bailing.png" width="100"/>
</p>

<p align="center">
    🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
</p>

## Introduction

Ring-lite-linear-preview is a hybrid-linear MoE LLM provided and open-sourced by InclusionAI, with 17.1B total parameters and 3.0B activated parameters. It is a long-reasoning model built on hybrid-linear attention, achieving near-linear computational complexity and near-constant space complexity during inference. The model was converted from [Ling-lite-0220](https://huggingface.co/models/inclusionAI/Ling-lite), which adopts a softmax-attention-based architecture. It matches the performance of DeepSeek-R1-Distill-Qwen-7B on standard reasoning benchmarks while substantially reducing computational overhead in both training and inference. In generation-speed tests based on vLLM, we observed more than double the throughput of softmax-attention models of the same scale (e.g., Ling-lite). To the best of our knowledge, it is the first open-source hybrid-linear reasoning language model.

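To build intuition for the near-constant space complexity mentioned above, here is a toy sketch of causal linear attention: it keeps a fixed-size recurrent state instead of a growing softmax KV cache. This is purely illustrative and not the actual Ring-lite kernels; the `phi` feature map and the dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

d = 8                                   # toy head dimension
S = torch.zeros(d, d)                   # running state: sum over t of phi(k_t) v_t^T
z = torch.zeros(d)                      # running normalizer: sum over t of phi(k_t)

def phi(x):
    # Simple positive feature map; the real model's kernel may differ.
    return F.elu(x) + 1.0

for t in range(1024):                   # decode one token at a time
    q, k, v = torch.randn(3, d)         # stand-ins for the current token's projections
    S = S + torch.outer(phi(k), v)      # O(d^2) state update, independent of t
    z = z + phi(k)
    o = (phi(q) @ S) / (phi(q) @ z + 1e-6)   # attention output for this token

# Memory stays O(d^2) at any sequence length, whereas a softmax KV cache grows as O(t * d).
```
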
## Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-lite-linear-preview | 17.1B | 3.0B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-distill) |

</div>

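To pre-fetch the weights instead of downloading them at first load, a minimal sketch using `huggingface_hub` (the repo id matches the Quickstart below; the `local_dir` value is an illustrative choice):

```python
from huggingface_hub import snapshot_download

# Download the full repository (weights, tokenizer, configs) into a local folder.
snapshot_download(
    repo_id="inclusionAI/Ring-lite-linear-preview",
    local_dir="./Ring-lite-linear-preview",
)
```
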
## Evaluation

On reasoning benchmarks, Ring-lite-linear-preview achieves 55.0 on AIME24 and 93.8 on MATH-500.

<div align="center">

| **Model** | **AIME24** | **MATH-500** | **GPQA-diamond** | **LiveCodeBench** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| DeepSeek-R1-Distill-Qwen-7B (reported) | 55.5 | 92.8 | 49.1 | 37.6 |
| DeepSeek-R1-Distill-Qwen-7B (reproduced) | 53.2 | 93.7 | 50.4 | 36.5 |
| Ring-lite-distill-preview-Stage-1 | 54.2 | 93.5 | 47.5 | 32.9 |
| Ring-lite-linear-preview | 55.0 | 93.8 | 46.5 | 29.8 |

</div>

## Inference Speed

To evaluate generation throughput, we deploy Ring-lite-linear-preview and the softmax-attention-based Ring-lite with vLLM on a single NVIDIA A100 GPU, with the input sequence length fixed to 1 token. The end-to-end (E2E) time required to generate output sequences of varying lengths is illustrated below: at a 32k output length, Ring-lite-linear-preview achieves 2.2× the throughput of Ring-lite.

<p align="center">
    <img src="https://modelscope.cn/api/v1/models/inclusionAI/Ring-lite-linear-preview/repo?Revision=master&FilePath=throughput.png&View=true" width="600"/>
</p>

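For reference, this is roughly how such a throughput probe can be scripted with vLLM's offline API, assuming a vLLM build that supports this architecture (as the measurements above imply); the prompt and output lengths are illustrative:

```python
import time
from vllm import LLM, SamplingParams

# Offline throughput probe: a 1-token prompt, then time generation at several output lengths.
llm = LLM(model="inclusionAI/Ring-lite-linear-preview")
prompt = "1"  # effectively a single-token input

for out_len in (1024, 4096, 16384, 32768):
    params = SamplingParams(max_tokens=out_len, ignore_eos=True, temperature=1.0)
    start = time.time()
    llm.generate([prompt], params)
    elapsed = time.time() - start
    print(f"output length {out_len}: {out_len / elapsed:.1f} tokens/s end-to-end")
```
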
Additionally, to illustrate the advantage in inference speed, we present a comparison between Ring-lite-linear-preview and the softmax-attention-based Ring-lite under a batch size of 64 and an output length of 16k (60x speedup). The KV cache usage of Ring-lite-linear-preview is nearly 1/6 that of Ring-lite, and the E2E time is reduced by 27.24% compared with Ring-lite.

<p align="center">
    <img src="https://modelscope.cn/api/v1/models/inclusionAI/Ring-lite-linear-preview/repo?Revision=master&FilePath=inference_speed.gif&View=true" width="600"/>
</p>

More details will be reported in our technical report [TBD].

## Requirements

- [transformers](https://github.com/huggingface/transformers) >= 4.48.3
- [flash-linear-attention](https://github.com/fla-org/flash-linear-attention) >= 0.2.1

## Quickstart

Here is a code snippet showing how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-lite-linear-preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

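Since reasoning traces can run to thousands of tokens, you may prefer to stream the output as it is generated rather than wait for the full sequence. A minimal sketch reusing the `model`, `tokenizer`, and `model_inputs` objects from the snippet above with `transformers`' `TextStreamer`:

```python
from transformers import TextStreamer

# Prints tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=8192,
    streamer=streamer,
)
```
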
## Deployment

Please refer to [GitHub](TBD).

## Dataset

The long-reasoning SFT data: [Ring-lite-distill-preview-sft-data](https://huggingface.co/datasets/inclusionAI/Ring-lite-distill-preview-sft-data)

## License

This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-distill/blob/main/LICENSE).

## Citation

[TBD]