---
license: other
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- PengQu/langchain-MRKL-finetune
language:
- zh
- en
---
# Llama-2-7b-vicuna-Chinese

Llama-2-7b-vicuna-Chinese is a chat model produced by full-parameter supervised fine-tuning on ShareGPT conversation data in both **English** and **Chinese**.

- Foundation model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), which is **licensed for commercial use**.
- Fine-tuning data: ShareGPT, ShareGPT-ZH, Langchain-MRKL-finetune (see the data-format sketch below)
- Training code: based on [FastChat](https://github.com/lm-sys/FastChat)

#### Key improvement: stronger English and Chinese ability than both the original Llama 2 and Vicuna
- English benchmark (MMLU): Llama-2-7b-vicuna-Chinese (48.8) > Llama-2-7b (45.3) > vicuna1.1 (44.8)
- Chinese benchmark (C-Eval): Llama-2-7b-vicuna-Chinese (34.7) > Llama-2-7b-chat (30.3) = vicuna1.1 (30.3)
- In practice, the model does not show the over-cautious refusals typical of Llama-2-chat.

## Loading the Weights with Hugging Face Transformers
See the following example for usage.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer (slow SentencePiece tokenizer) and the model onto the GPU.
tokenizer = AutoTokenizer.from_pretrained("PengQu/Llama-2-7b-vicuna-Chinese", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("PengQu/Llama-2-7b-vicuna-Chinese").to("cuda")

# Vicuna-style prompt template: a system preamble followed by USER/ASSISTANT turns.
instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
prompt = instruction.format('用flask写一个简单的http服务器。')  # "Write a simple HTTP server with Flask."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

generation_output = model.generate(input_ids=input_ids, max_new_tokens=512)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```

The output is as follows:
```
To create a simple HTTP server with Flask, run the following command in a terminal:

pip install Flask

This installs the Flask package so that you can use it.
Below is a simple Flask application that listens on port 5000 and responds with a "Hello, World!" message:

from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
    return 'Hello, World!'
if __name__ == '__main__':
    app.run(debug=True)

To run this application, save the code to a file named `app.py` and then run the following command in a terminal:

python app.py

This starts the Flask server, and you should be able to see the "Hello, World!" message by visiting `http://localhost:5000` in a web browser.
You can also use Flask's routing feature to define different endpoints for different URLs. For example, you can define a route that listens on port 8080 and responds with a "Hello, Flask!" message:

from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
    return 'Hello, Flask!'
if __name__ == '__main__':
    app.run(debug=True, host='localhost', port=8080)

To run this application, save the code to a file named `app.py` and then run the following command in a terminal:

python app.py
```
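
The example above is single-turn. For multi-turn chat, Vicuna-style models typically carry earlier turns forward inside the same template. The helper below is a minimal sketch, assuming this model follows the vicuna-1.1 convention of separating turns with a space and terminating each assistant reply with the `</s>` end-of-sequence token; `build_prompt` and its argument names are hypothetical, so verify the format against your own generations.

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(history, user_message):
    """Assemble a Vicuna-style multi-turn prompt.

    history: list of (user, assistant) pairs from earlier turns.
    Assumes vicuna-1.1 formatting (an assumption, not confirmed by this card).
    """
    prompt = SYSTEM
    for user, assistant in history:
        prompt += f" USER: {user} ASSISTANT: {assistant}</s>"
    prompt += f" USER: {user_message} ASSISTANT:"
    return prompt

# Second turn of a hypothetical conversation:
history = [("Write a simple HTTP server with Flask.", "Here is a minimal Flask app ...")]
print(build_prompt(history, "How do I change the port to 8080?"))
```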
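
To print tokens as they are generated instead of waiting for `generate` to finish, you can use `TextStreamer` from transformers. A minimal sketch, reusing `model`, `tokenizer`, and `input_ids` from the loading example above:

```python
from transformers import TextStreamer

# Decode and print tokens to stdout as they are produced;
# skip_prompt=True avoids echoing the input prompt back.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(input_ids=input_ids, max_new_tokens=512, streamer=streamer)
```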