akhilpandey95 committed
Commit c800f21 · verified · 1 parent: 830e9bf

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,3 +1,177 @@
- ---
- license: mit
- ---
+ ---
+ license: cc-by-nc-4.0
+ tags:
+ - galactica
+
+ widget:
+ - text: "The Transformer architecture [START_REF]"
+ - text: "The Schwarzschild radius is defined as: \\["
+ - text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
+ - text: "Lecture 1: The Ising Model\n\n"
+ - text: "[START_I_SMILES]"
+ - text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
+ inference: false
+ ---
+
+ ![logo](https://s3.amazonaws.com/moonup/production/uploads/1668679814649-62441d1d9fdefb55a0b7d12c.png)
+
+
+ # GALACTICA 1.3B (base)
20
+
21
+ Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
22
+
23
+ Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
24
+
25
+ ## Model Details
26
+
27
+ The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
28
+
29
+ | Size | Parameters |
30
+ |:-----------:|:-----------:|
31
+ | `mini` | 125 M |
32
+ | `base` | 1.3 B |
33
+ | `standard` | 6.7 B |
34
+ | `large` | 30 B |
35
+ | `huge` | 120 B |
36
+
37
+
38
+ ## Release Date
39
+
40
+ November 2022
41
+
42
+ ## Model Type
43
+
+ Transformer-based architecture in a decoder-only setup, with a few modifications (see the paper for more details).
+
+ ## Paper & Demo
+
+ [Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
+
+ ## Model Use
+
+ The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
+
+ The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
+
+ ## Training Data
+
+ The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information, and see the paper for full details on the training data.
+
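To illustrate this interface, the widget prompts in the metadata above show how each modality is delimited with plain-text special tokens. The snippet below is only a sketch collecting those same prompts; it is not part of the original model card.

```python
# Illustrative prompts only, taken from the widget examples in this card:
# each modality is wrapped in plain-text special tokens inside the input string.
prompts = {
    "citation prediction": "The Transformer architecture [START_REF]",
    "LaTeX / math": "The Schwarzschild radius is defined as: \\[",
    "step-by-step reasoning": "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>",
    "free-form generation": "Lecture 1: The Ising Model\n\n",
    "molecules (SMILES)": "[START_I_SMILES]",
    "protein sequences": "[START_AMINO]...[END_AMINO]",  # full sequence elided here
}
```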
+ ## How to use
+
+ Below are some example scripts showing how to use the model with `transformers`:
+
+ ## Using the PyTorch model
+
+ ### Running the model on a CPU
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, OPTForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
+ model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")
+
+ input_text = "The Transformer architecture [START_REF]"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ ### Running the model on a GPU
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install accelerate
+ from transformers import AutoTokenizer, OPTForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
+ model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto")
+
+ input_text = "The Transformer architecture [START_REF]"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ ### Running the model on a GPU using different precisions
+
+ #### FP16
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install accelerate
+ import torch
+ from transformers import AutoTokenizer, OPTForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
+ model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto", torch_dtype=torch.float16)
+
+ input_text = "The Transformer architecture [START_REF]"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ #### INT8
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, OPTForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
+ model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto", load_in_8bit=True)
+
+ input_text = "The Transformer architecture [START_REF]"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+
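The examples above call `model.generate` with its default settings, which return only a short continuation. As a minimal sketch that is not part of the original card, the standard `transformers` generation arguments can be used to control output length and decoding; the parameter values below are illustrative choices, not recommendations from the paper.

```python
# Illustrative only: standard generation arguments on the same checkpoint.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")

input_ids = tokenizer("The Transformer architecture [START_REF]", return_tensors="pt").input_ids

# max_new_tokens bounds the length of the continuation; do_sample/top_p/temperature
# switch from greedy decoding to nucleus sampling (values here are arbitrary).
outputs = model.generate(
    input_ids,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0]))
```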
+ ## Performance and Limitations
+
+ The model outperforms several existing language models on a range of knowledge probes, reasoning tasks, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open-source general language models. That being said, we note a number of limitations in this section.
+
+ As with other language models, GALACTICA is often prone to hallucination, and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground-truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
+
+ In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details), so we recommend care when using the model for generation.
+
+ ## Broader Implications
+
+ GALACTICA can potentially be used as a new way to discover academic literature. We also expect substantial downstream use in particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as an alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
+
+ We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
+
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{GALACTICA,
+     title={GALACTICA: A Large Language Model for Science},
+     author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
+     year={2022}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "/content/base",
+   "_remove_final_layer_norm": false,
+   "activation_dropout": 0.0,
+   "activation_function": "gelu",
+   "architectures": [
+     "OPTForCausalLM"
+   ],
+   "attention_dropout": 0.1,
+   "enable_bias": true,
+   "bos_token_id": 0,
+   "do_layer_norm_before": true,
+   "dropout": 0.1,
+   "eos_token_id": 2,
+   "ffn_dim": 8192,
+   "hidden_size": 2048,
+   "init_std": 0.02,
+   "layer_norm_elementwise_affine": true,
+   "layerdrop": 0.0,
+   "learned_embeddings": true,
+   "max_position_embeddings": 2048,
+   "model_type": "opt",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 24,
+   "pad_token_id": 1,
+   "scale_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.21.0.dev0",
+   "use_cache": true,
+   "vocab_size": 50000,
+   "word_embed_proj_dim": 2048
+ }
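For reference, this configuration can be loaded and inspected through the standard `transformers` config API. The snippet below is a minimal sketch, not part of the uploaded files, and assumes the published `facebook/galactica-1.3b` checkpoint.

```python
# Minimal sketch: inspect the OPT-style configuration shipped with the checkpoint.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("facebook/galactica-1.3b")

# These fields mirror the config.json above: model_type "opt", 24 layers,
# 32 attention heads, hidden size 2048, vocabulary of 50000 tokens.
print(config.model_type, config.num_hidden_layers, config.num_attention_heads)
print(config.hidden_size, config.vocab_size, config.max_position_embeddings)
```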
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "pad_token_id": 1,
+   "transformers_version": "4.27.0.dev0"
+ }
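These defaults (BOS id 0, EOS id 2, PAD id 1) are picked up automatically by `model.generate`. As a minimal sketch that is not part of the uploaded files, they can also be loaded explicitly with the `transformers` `GenerationConfig` API:

```python
# Sketch: load the generation defaults that ship with the checkpoint.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("facebook/galactica-1.3b")
print(gen_config.bos_token_id, gen_config.eos_token_id, gen_config.pad_token_id)  # 0 2 1
```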
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6025bcb14e5169f20cb27f2df0e0dfac7475b1e53bac8c296df2e9b029a9309b
+ size 2835247696
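This entry is a Git LFS pointer: the weights themselves are stored out of band and identified by their SHA-256 digest and byte size. The snippet below is a hedged sketch, not part of the repository, showing how a downloaded copy could be checked against the pointer; the local path is hypothetical.

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib

EXPECTED_SHA256 = "6025bcb14e5169f20cb27f2df0e0dfac7475b1e53bac8c296df2e9b029a9309b"
EXPECTED_SIZE = 2835247696  # bytes, taken from the pointer

path = "model.safetensors"  # hypothetical local path after download
digest = hashlib.sha256()
size = 0
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"unexpected file size: {size}"
assert digest.hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("model.safetensors matches the LFS pointer")
```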
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "name_or_path": "/content/tokenizer",
+   "special_tokens_map_file": "/content/tokenizer/special_tokens_map.json",
+   "tokenizer_class": "PreTrainedTokenizerFast"
+ }
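The tokenizer is a `PreTrainedTokenizerFast` backed by the `tokenizer.json` file above. As a minimal sketch that is not part of the uploaded files, it can be loaded through `AutoTokenizer` and cross-checked against the model config:

```python
# Sketch: load the fast tokenizer and confirm it lines up with config.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
print(tokenizer.is_fast)     # True: backed by tokenizer.json (PreTrainedTokenizerFast)
print(tokenizer.vocab_size)  # expected to match "vocab_size": 50000 in config.json
print(tokenizer("The Transformer architecture [START_REF]").input_ids)
```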