zhangdonghao committed
Commit 535e880 · verified · 1 Parent(s): da05ced

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -12,7 +12,7 @@ pipeline_tag: text-generation
 # Ring-lite-linear-preview
 
 <p align="center">
-<img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/blob/main/ant-bailing.png" width="100"/>
+<img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/resolve/main/ant-bailing.png" width="100"/>
 <p>
 
 <p align="center">
@@ -52,12 +52,12 @@ In terms of the evaluation of reasoning ability, Ring-lite-linear-preview achi
 To evaluate generation throughput, we deploy Ring-lite-linear and the softmax-attention-based Ring-lite with vLLM on a single NVIDIA A100 GPU. The input sequence length is fixed to 1, and we measure the end-to-end (E2E) generation time for output sequences of varying lengths, as illustrated below. As the figure shows, Ring-lite-linear-preview achieves 2.2× the throughput of Ring-lite at a 32k output length.
 
 <p align="center">
-<img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/blob/main/throughput.png" width="600"/>
+<img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/resolve/main/throughput.png" width="600"/>
 <p>
 
 Additionally, to illustrate the advantage in inference speed, we compare Ring-lite-linear-preview and the softmax-attention-based Ring-lite under a batch size of 64 and an output length of 16k (animation shown at 60× speed). The KV cache usage of Ring-lite-linear-preview is nearly 1/6 that of Ring-lite, and its E2E time is 27.24% lower.
 <p align="center">
-<img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/blob/main/inference_speed.gif" width="600"/>
+<img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/resolve/main/inference_speed.gif" width="600"/>
 <p>
 
 More details will be reported in our technical report [TBD].
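The substance of this commit is the switch from `/blob/main/` to `/resolve/main/` in the three image URLs. On Hugging Face, a `blob` URL points at the HTML file-viewer page, while a `resolve` URL redirects to the raw file, which is what an `<img>` tag needs. The difference is easy to verify; the short script below is an illustrative check, not part of the repository:

```python
# Compare what the two URL schemes serve: the "blob" route answers with
# the HTML viewer page, while the "resolve" route redirects to the raw
# asset (image/png here), which is what <img src=...> requires.
import requests

base = "https://huggingface.co/inclusionAI/Ring-lite-linear-preview"

for route in ("blob", "resolve"):
    url = f"{base}/{route}/main/ant-bailing.png"
    resp = requests.get(url, allow_redirects=True, timeout=30)
    print(f"{route:8s} {resp.status_code} {resp.headers.get('content-type')}")
```

Before this fix, the README's `<img>` tags fetched HTML pages instead of images, so the logo and figures failed to render.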
 
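For the throughput setup described in the README above (vLLM on a single A100, a 1-token input, E2E time over varying output lengths), here is a minimal sketch of how such a measurement could be scripted with vLLM's offline Python API. The prompt, output lengths, and sampling settings are illustrative assumptions, not the authors' published harness, and it presumes your vLLM build can serve this hybrid-attention model:

```python
# Hypothetical E2E timing sketch: 1-token input, fixed output lengths.
import time
from vllm import LLM, SamplingParams

llm = LLM(model="inclusionAI/Ring-lite-linear-preview")  # single-GPU deployment
prompt = "a"  # roughly a 1-token input, matching the fixed input length of 1

for out_len in (2048, 8192, 16384, 32768):
    params = SamplingParams(
        temperature=0.0,
        max_tokens=out_len,
        ignore_eos=True,  # force the full output length so timings are comparable
    )
    start = time.perf_counter()
    llm.generate([prompt], params)
    elapsed = time.perf_counter() - start
    print(f"{out_len:>6} output tokens: {elapsed:8.1f}s E2E, {out_len / elapsed:6.1f} tok/s")
```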
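The roughly 1/6 KV-cache figure follows from the hybrid architecture: softmax-attention layers cache K/V for every generated token, so their memory grows with output length, while linear-attention layers carry a fixed-size recurrent state. The back-of-envelope model below uses made-up dimensions (the layer count, head configuration, and hybrid ratio are hypothetical, not Ring-lite's real configuration) purely to show the mechanism:

```python
# Toy memory model (hypothetical dimensions): softmax layers store K/V per
# token, while linear-attention layers store a constant-size state per head.
batch, seq_len = 64, 16_384              # the setting quoted in the README
layers, kv_heads, head_dim = 28, 4, 128  # hypothetical model dimensions
bytes_per_elem = 2                       # bf16

def softmax_kv_bytes(n_layers: int) -> int:
    # K and V cached for every position in every layer
    return batch * seq_len * n_layers * 2 * kv_heads * head_dim * bytes_per_elem

def linear_state_bytes(n_layers: int) -> int:
    # one (head_dim x head_dim) state per head per layer, independent of seq_len
    return batch * n_layers * kv_heads * head_dim * head_dim * bytes_per_elem

full = softmax_kv_bytes(layers)
n_softmax = layers // 7  # hypothetical hybrid: keep 1 softmax layer in every 7
hybrid = softmax_kv_bytes(n_softmax) + linear_state_bytes(layers - n_softmax)

print(f"all-softmax cache: {full / 2**30:6.2f} GiB")
print(f"hybrid cache:      {hybrid / 2**30:6.2f} GiB  (~1/{full / hybrid:.1f} of full)")
```

With these toy numbers the hybrid cache lands near the factor the README reports; the exact ratio depends on the real layer mix and head configuration.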