namespace-Pt committed
Commit 30b1390 · verified · 1 Parent(s): bf9b66d

Update README.md

Files changed (1)
  1. README.md +2 -26
README.md CHANGED
@@ -6,15 +6,7 @@ pipeline_tag: text-generation
 
  # Intro
 
- [Activation Beacon](https://arxiv.org/abs/2401.03462) compresses the original KV cache into fewer yet more compact states (a.k.a. beacons), hence enabling the LLM to perceive a longer context within its fixed context window. It is known for the following features:
- - **Effective**
-   - there is little information loss at compression ratios of 2, 4, and 8;
- - **Efficient**
-   - it drastically reduces the GPU memory consumption of the KV cache;
- - **Compatible**
-   - it can work together with position extrapolation (e.g. YaRN) to further extend the context length; it can also work with grouped-query attention to further reduce the KV cache size;
- - **Low-Cost**
-   - it is lightweight and can be trained efficiently with roughly 1B tokens.
+ [Activation Beacon](https://arxiv.org/abs/2401.03462) is a plug-in module for transformer-based LLMs that enables effective, efficient, and flexible compression of long contexts.
 
  # Environment
  ```
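The bullet list removed above claims large KV-cache savings at compression ratios of 2, 4, and 8. As a rough illustration (not part of the README), the sketch below estimates per-sequence KV-cache memory under Qwen2-7B-style grouped-query attention; the layer, head, and precision values are assumptions chosen for illustration, and compressing the cached states by a factor r shrinks this memory by roughly r.

```python
# Rough KV-cache sizing sketch. The attention settings below are assumed
# (Qwen2-7B-like: 28 layers, 4 KV heads, head_dim 128, 2-byte values);
# they are illustrative, not read from this repository's config.
def kv_cache_bytes(seq_len, num_layers=28, num_kv_heads=4, head_dim=128, bytes_per_value=2):
    # factor of 2 accounts for storing both keys and values
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

full = kv_cache_bytes(seq_len=128_000)
for ratio in (2, 4, 8):
    compressed = kv_cache_bytes(seq_len=128_000 // ratio)
    print(f"x{ratio} compression: {full / 2**30:.2f} GiB -> {compressed / 2**30:.2f} GiB")
```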
@@ -63,20 +55,4 @@ with torch.no_grad():
      print(f"Answers: {example['answer']}")
      print(f"Prediction: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")
  ```
- **NOTE**: It's okay to see warnings like `This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (32768). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.` Just ignore it.
-
-
- # Results
-
-
-
- ## LongBench
-
- | Model                     | Single QA | Multi QA | Summarization | Few-Shot | Code  | AVG   |
- |---------------------------|-----------|----------|---------------|----------|-------|-------|
- | qwen-2-7b-instruct        | 39.60     | 36.92    | 27.97         | 71.12    | 62.34 | 47.59 |
- | beacon-qwen-2-7b-instruct | 40.76     | 43.73    | 27.23         | 68.87    | 68.47 | 49.81 |
-
- ## NIAH
-
- ![](needle.png)
+ **NOTE**: It's okay to see warnings like `This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (32768). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.` Just ignore it.
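The diff only shows the tail of the README's usage snippet (the `with torch.no_grad():` block and the two `print` calls). Below is a minimal sketch of what the surrounding code plausibly looks like; the repo id and the `example` dict are placeholders rather than values from the README, and loading with `trust_remote_code=True` is an assumption based on the beacon modules shipping as custom modeling code.

```python
# Hedged sketch of a full generation call; names marked "placeholder" are not from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "namespace-Pt/beacon-qwen-2-7b-instruct"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # assumption: beacon modules live in the repo's custom modeling code
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Placeholder long-context QA example; the README's real `example` is not shown in this diff.
example = {
    "context": "A very long document goes here ...",
    "question": "What does Activation Beacon do?",
    "answer": "It compresses long contexts into compact beacon states.",
}

prompt = f"{example['context']}\n\nQuestion: {example['question']}\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)

print(f"Answers: {example['answer']}")
print(f"Prediction: {tokenizer.decode(outputs[0], skip_special_tokens=True)}")
```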