Transformers
PyTorch
Chinese
megatron-bert
wanng committed on
Commit c5909bb · 1 Parent(s): 9cfe461

Update README.md

Files changed (1)
  1. README.md +64 -15
README.md CHANGED
@@ -5,17 +5,52 @@ license: apache-2.0
  widget:
  - text: "生活的真谛是[MASK]。"
  ---
- # Zhouwenwang-Unified-1.3B model (Chinese),one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
- Zhouwenwang-Unified-1.3B apply a new unified structure, and jointly developed by the IDEA-CCNL and Zhuiyi Technology. In the pre-training, the model considers LM (Language Model) and MLM (Mask Language Model) tasks uniformly, and adds rotational position coding, so that the model has the ability to generate and understand. Zhouwenwang-Unified-1.3B is the largest model for LM and MLM tasks in the Chinese field. It will continue to be optimized in the direction of model scale, knowledge integration, and supervision task assistance.

- ## Usage
- There is no structure of Zhouwenwang-Unified-1.3B in [Transformers](https://github.com/huggingface/transformers), you can run follow code to get structure of Zhouwenwang-Unified-1.3B from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
  ```shell
  git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
  ```

- ### Load model

  ```python
  from fengshen import RoFormerModel
  from fengshen import RoFormerConfig
@@ -24,11 +59,13 @@ from transformers import BertTokenizer
  tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
  config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
  model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
- ```
- ### Generate task
- You can use Zhouwenwang-1.3B to continue writing

  ```python
  from fengshen import RoFormerModel
@@ -57,15 +94,27 @@ for i in range(max_length):
  print(sentence)
  ```

- ## Scores on downstream chinese tasks (without any data augmentation)
- | Model| afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
- | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
- | roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 |
- | Zhouwenwang-Unified-1.3B | 0.7463 | 0.6036 | 0.6288 | 0.7654 | 0.7741 | 0.8849 | 0. 8777 |

- ## Citation
- If you find the resource is useful, please cite the following website in your paper.

  ```
  @misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
 
  widget:
  - text: "生活的真谛是[MASK]。"
  ---
+ # Zhouwenwang-Unified-1.3B
+
+ - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
+ - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
+
+ ## 简介 Brief Introduction
+
+ 与追一科技合作探索的中文统一模型,13亿参数的编码器结构模型。
+
+ A Chinese unified model explored in cooperation with Zhuiyi Technology: an encoder-structure model with 1.3B parameters.
+
+ ## 模型分类 Model Taxonomy
+
+ | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
+ | :----: | :----: | :----: | :----: | :----: | :----: |
+ | 特殊 Special | 探索 Exploration | 周文王 Zhouwenwang | 待定 TBD | 1.3B | - |
+
+ ## 模型信息 Model Information
+
+ IDEA研究院认知计算中心联合追一科技有限公司提出的具有新结构的大模型。该模型在预训练阶段时考虑统一LM和MLM的任务,这让其同时具备生成和理解的能力,并且增加了旋转位置编码技术。目前已有13亿参数的Zhouwenwang-Unified-1.3B大模型,是中文领域中可以同时做LM和MLM任务的最大的模型。我们后续会持续在模型规模、知识融入、监督辅助任务等方向不断优化。
+
+ A large-scale model (Zhouwenwang-Unified-1.3B) with a new structure, proposed by IDEA CCNL and Zhuiyi Technology. The model unifies the LM (Language Modeling) and MLM (Masked Language Modeling) tasks during the pre-training phase, which gives it both generative and comprehension capabilities, and it applies rotary position encoding. At present, Zhouwenwang-Unified-1.3B, with 1.3B parameters, is the largest Chinese model that can perform both LM and MLM tasks. In the future, we will continue to optimize it in the directions of model size, knowledge incorporation, and supervisory assistance tasks.
+
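+ 下面给出上文提到的旋转位置编码的最小示意(标准 RoPE 公式的草图,并非本模型的具体实现):
+
+ A minimal sketch of the rotary position encoding mentioned above (the standard RoPE formulation; the model's actual implementation lives in the `fengshen` RoFormer code and may differ in detail):
+
+ ```python
+ import torch
+
+ def rotary_position_encoding(x: torch.Tensor) -> torch.Tensor:
+     # x: (seq_len, dim) query/key activations; dim must be even.
+     # Each pair (x[p, 2i], x[p, 2i+1]) is rotated by the angle
+     # p * theta_i, where theta_i = 10000 ** (-2i / dim).
+     seq_len, dim = x.shape
+     pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
+     theta = 10000.0 ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
+     angles = pos * theta  # (seq_len, dim // 2)
+     cos, sin = angles.cos(), angles.sin()
+     x1, x2 = x[:, 0::2], x[:, 1::2]
+     out = torch.empty_like(x)
+     out[:, 0::2] = x1 * cos - x2 * sin  # rotate each 2D pair
+     out[:, 1::2] = x1 * sin + x2 * cos
+     return out
+ ```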
+ ### 下游任务 Performance
+
+ 下游中文任务的得分(没有做任何数据增强)。
+
+ Scores on downstream Chinese tasks (without any data augmentation):
+
+ | 模型 Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
+ | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
+ | roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.7770 | 0.8140 | 0.8914 | 0.8600 |
+ | Zhouwenwang-Unified-1.3B | 0.7463 | 0.6036 | 0.6288 | 0.7654 | 0.7741 | 0.8849 | 0.8777 |
+
+ ## 使用 Usage
+
+ 因为[transformers](https://github.com/huggingface/transformers)库中是没有Zhouwenwang-Unified-1.3B相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
+
+ Since there is no structure for Zhouwenwang-Unified-1.3B in the [transformers library](https://github.com/huggingface/transformers), you can find the structure of Zhouwenwang-Unified-1.3B and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):

  ```shell
  git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
  ```
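+
+ 克隆之后,还需要让 Python 找得到 `fengshen` 包。一种做法(假设 `fengshen` 包位于仓库根目录;请以仓库自身的安装说明为准):
+
+ After cloning, Python also needs to find the `fengshen` package. One way to do this (a sketch assuming the `fengshen` package sits at the repository root; check the repository's own setup instructions):
+
+ ```python
+ import sys
+
+ # Hypothetical local path: wherever you cloned Fengshenbang-LM.
+ sys.path.append("./Fengshenbang-LM")
+
+ import fengshen  # import works once the path above is on sys.path
+ ```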

+ ### 加载模型 Loading Models
+
  ```python
  from fengshen import RoFormerModel
  from fengshen import RoFormerConfig
  from transformers import BertTokenizer

  tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
  config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
  model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
+ ```
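+
+ 加载完成后可以做一个快速检查(示意代码,假设模型遵循 Transformers 常见的 forward 接口;实际输出字段可能不同):
+
+ A quick sanity check after loading (a sketch assuming the model follows the usual Transformers forward conventions; the exact output fields may differ):
+
+ ```python
+ inputs = tokenizer("生活的真谛是快乐。", return_tensors="pt")
+ outputs = model(**inputs)
+ # Encoder output: one hidden vector per input token.
+ print(outputs.last_hidden_state.shape)
+ ```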

+ ### 使用示例 Usage Examples
+
+ 你可以使用该模型进行续写任务。
+
+ You can use the model for text continuation tasks.

  ```python
  from fengshen import RoFormerModel

  print(sentence)
  ```
96
 
97
+ ## 引用 Citation
98
+
99
+ 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
 
 
100
 
101
+ If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
102
+
103
+ ```text
104
+ @article{fengshenbang,
105
+ author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
106
+ title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
107
+ journal = {CoRR},
108
+ volume = {abs/2209.02970},
109
+ year = {2022}
110
+ }
111
  ```
112
+
113
+ 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
114
+
115
+ You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
116
+
117
+ ```text
118
  @misc{Fengshenbang-LM,
119
  title={Fengshenbang-LM},
120
  author={IDEA-CCNL},