pan-li committed (verified)
Commit 9ba7cd5 · 1 Parent(s): 2a6e096

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ md5_to_str.fasta filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,66 @@
- ---
- license: apache-2.0
- ---
+ # Dataset Card for Contact Prediction Dataset for RAGProtein
+
+ ### Dataset Summary
+
+ Contact map prediction aims to determine whether two residues, $i$ and $j$, are in contact, defined by the distance between them falling below a fixed threshold ($<$ 8 Angstrom). This task was an important component of early AlphaFold versions for structure prediction.
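+
+ As a rough illustration (a minimal sketch, not part of this dataset's construction pipeline; using C-beta coordinates here is an assumption), a contact list in the same pair format as the `label` field can be computed from per-residue coordinates:
+
+ ```
+ import numpy as np
+
+ def contact_pairs(coords, cutoff=8.0):
+     """coords: (L, 3) array of per-residue coordinates (e.g. C-beta atoms).
+     Returns index pairs [i, j] (i <= j) whose distance is below `cutoff` Angstrom."""
+     diff = coords[:, None, :] - coords[None, :, :]   # (L, L, 3) pairwise offsets
+     dist = np.sqrt((diff ** 2).sum(-1))              # (L, L) distance matrix
+     i_idx, j_idx = np.nonzero(dist < cutoff)
+     return [[i, j] for i, j in zip(i_idx.tolist(), j_idx.tolist()) if i <= j]
+ ```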
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance contains a protein sequence string, a list of contact labels, the "|"-separated MSA, and a structure embedding. Each sub-label, e.g. "[2, 3]", indicates that the 3rd residue is in contact with the 4th residue (indices start from 0). See the [Contact map prediction dataset viewer](https://huggingface.co/datasets/Bo1015/contact_prediction_binary/viewer/default/test) to explore more examples.
+
+ ```
+ {'seq': 'QNLLKNLAASLGRKPFVADKQGVYRLTIDKHLVMLAPHGSELVLRTPIDAPMLREGNNVNVTLLRSLMQQALAWAKRYPQTLVLDDCGQLVLEARLRLQELDTHGLQEVINKQLALLEHLIPQLTP',
+  'label': [ [ 0, 0 ], [ 0, 1 ], [ 1, 1 ], [ 1, 2 ], [ 1, 3 ], [ 1, 101 ], [ 2, 2 ], [ 2, 3 ], [ 2, 4 ], [ 3, 3 ], [ 3, 4 ], [ 3, 5 ], [ 3, 99 ], [ 3, 100 ], [ 3, 101 ], [ 4, 4 ], [ 4, 5 ], [ 4, 53 ], ...],
+  'msa': 'QNLLKNLAASLGRKPFVADKQGVYRLTIDKHLVMLAPHGSELVLRTPIDAPMLREGNNVNVTLLRSLMQQALAWAKRYPQTLVLDDCGQLVLEARLRLQELDTHGLQEVINKQLALLEHLIPQLTP|QNLLKNLAASLGRKPFVADKQGVYRLTIDKHLVMLAPHGSELVLRTPIDAPMLREGNNVNVTLLRSLMQQALAWAKRYPQTLVLDDCGQLVLEARLRLQELDTHGLQEVINKQLALLEHLIPQLTP...',
+  'str_emb': [seq_len, 384]}
+ ```
+
+ The average `seq` length and the average number of `label` pairs per instance are provided below:
+
+ | Feature | Mean count |
+ | ------- | ---------- |
+ | seq     | 249        |
+ | label   | 1,500      |
+
+ ### Data Fields
+
+ - `seq`: a string containing the protein sequence
+ - `label`: a list of residue index pairs, one pair per contact
+ - `msa`: MSA sequences separated by "|"
+ - `str_emb`: structure embeddings generated by AIDO.StructureTokenizer from AlphaFold2-predicted structures
+
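+ The splits can be loaded via the repository's loading script; a minimal sketch (the repository id below is a placeholder to fill in, and `trust_remote_code=True` is required because a custom script builds the examples):
+
+ ```
+ from datasets import load_dataset
+
+ # Replace "<repo_id>" with this repository's id on the Hugging Face Hub
+ ds = load_dataset("<repo_id>", trust_remote_code=True)
+ sample = ds["test"][0]
+ print(sample["seq"][:40], len(sample["label"]))
+ ```
+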
+ ### Data Splits
+
+ The contact map prediction dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics of the dataset.
+
+ | Dataset Split | Number of Instances in Split |
+ | ------------- | ---------------------------- |
+ | Train         | 12,041                       |
+ | Validation    | 1,505                        |
+ | Test          | 1,505                        |
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The [trRosetta dataset](https://www.pnas.org/doi/10.1073/pnas.1914677117) is used as the initial source dataset.
+
+ ### Licensing Information
+
+ The dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
+
+ ### Processed Data Collection
+
+ Single-sequence data are collected from the following paper:
+
+ ```
+ @misc{chen2024xtrimopglm,
+   title={xTrimoPGLM: unified 100B-scale pre-trained transformer for deciphering the language of protein},
+   author={Chen, Bo and Cheng, Xingyi and Li, Pan and Geng, Yangli-ao and Gong, Jing and Li, Shen and Bei, Zhilei and Tan, Xu and Wang, Boyan and Zeng, Xin and others},
+   year={2024},
+   eprint={2401.06199},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   note={arXiv preprint arXiv:2401.06199}
+ }
+ ```
codebook.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52139fb587368235751a0464fd3be7a6beb0fff2e96a0164012f858702a9bcf8
+ size 787617
contact_prediction_binary-rag.py ADDED
@@ -0,0 +1,265 @@
+ # -*- coding: utf-8 -*-
+
+ import os
+ import re
+ import gzip
+ import torch
+ import numpy as np
+ from os.path import join
+ from typing import Dict, Union, Optional, List, Tuple, Mapping
+ import datasets
+
+ def get_md5(aa_str):
+     """
+     Calculate the MD5 hash of a protein sequence (case-insensitive).
+     """
+     import hashlib
+     assert isinstance(aa_str, str), aa_str
+
+     aa_str = aa_str.upper()
+     return hashlib.md5(aa_str.encode('utf-8')).hexdigest()
+
+ def load_fasta(seqFn, rem_tVersion=False, load_annotation=False, full_line_as_id=False):
+     """
+     seqFn           -- Fasta file or input handle (with readline implementation)
+     rem_tVersion    -- Remove version information. ENST000000022311.2 => ENST000000022311
+     load_annotation -- Load sequence annotation
+     full_line_as_id -- Use the full head line (starts with >) as sequence ID. Cannot be specified simultaneously with load_annotation
+
+     Return:
+         {tid1: seq1, ...}                       if load_annotation==False
+         {tid1: seq1, ...}, {tid1: annot1, ...}  if load_annotation==True
+     """
+     if load_annotation and full_line_as_id:
+         raise RuntimeError("Error: load_annotation and full_line_as_id cannot be specified simultaneously")
+     if rem_tVersion and full_line_as_id:
+         raise RuntimeError("Error: rem_tVersion and full_line_as_id cannot be specified simultaneously")
+
+     fasta = {}
+     annotation = {}
+     cur_tid = ''
+     cur_seq = ''
+
+     if isinstance(seqFn, str):
+         IN = open(seqFn)
+     elif hasattr(seqFn, 'readline'):
+         IN = seqFn
+     else:
+         raise RuntimeError(f"Expected seqFn: {type(seqFn)}")
+     for line in IN:
+         if line[0] == '>':
+             # Save the previous record before starting a new one
+             if cur_tid != '':
+                 fasta[cur_tid] = re.sub(r"\s", "", cur_seq)
+                 cur_seq = ''
+             data = line[1:-1].split(None, 1)
+             cur_tid = line[1:-1] if full_line_as_id else data[0]
+             annotation[cur_tid] = data[1] if len(data) == 2 else ""
+             if rem_tVersion and '.' in cur_tid:
+                 cur_tid = ".".join(cur_tid.split(".")[:-1])
+         elif cur_tid != '':
+             cur_seq += line.rstrip()
+
+     if isinstance(seqFn, str):
+         IN.close()
+
+     # Save the last record
+     if cur_seq != '':
+         fasta[cur_tid] = re.sub(r"\s", "", cur_seq)
+
+     if load_annotation:
+         return fasta, annotation
+     else:
+         return fasta
+
+ def load_msa_txt(file_or_stream, load_id=False, load_annot=False, sort=False):
+     """
+     Read an MSA txt file
+
+     Parameters
+     --------------
+     file_or_stream: file or stream to read (with read method)
+     load_id: read identity values and return them
+     load_annot: read annotations and return them
+     sort: sort MSA sequences by identity (descending)
+
+     Return
+     --------------
+     msa: list of MSA sequences; the first sequence is the query sequence
+     id_arr: identity of MSA sequences (if load_id)
+     annotations: annotations of MSA sequences (if load_annot)
+     """
+     msa = []
+     id_arr = []
+     annotations = []
+
+     if hasattr(file_or_stream, 'read'):
+         lines = file_or_stream.read().strip().split('\n')
+     elif file_or_stream.endswith('.gz'):
+         with gzip.open(file_or_stream) as IN:
+             lines = IN.read().decode().strip().split('\n')
+     else:
+         with open(file_or_stream) as IN:
+             lines = IN.read().strip().split('\n')
+
+     for idx, line in enumerate(lines):
+         data = line.strip().split()
+         if idx == 0:
+             # The first line holds only the query sequence
+             assert len(data) == 1, f"Expect 1 element for the 1st line, but got {data} in {file_or_stream}"
+             q_seq = data[0]
+         else:
+             if len(data) >= 2:
+                 id_arr.append(float(data[1]))
+             else:
+                 # No identity column: compute identity against the query
+                 assert len(q_seq) == len(data[0])
+                 id_ = round(np.mean([r1 == r2 for r1, r2 in zip(q_seq, data[0])]), 3)
+                 id_arr.append(id_)
+             msa.append(data[0])
+             if len(data) >= 3:
+                 annotations.append(" ".join(data[2:]))
+             else:
+                 annotations.append(None)
+
+     id_arr = np.array(id_arr, dtype=np.float64)
+     if sort:
+         id_order = np.argsort(id_arr)[::-1]
+         msa = [msa[i] for i in id_order]
+         id_arr = id_arr[id_order]
+         annotations = [annotations[i] for i in id_order]
+     msa = [q_seq] + msa
+
+     outputs = [msa]
+     if load_id:
+         outputs.append(id_arr)
+     if load_annot:
+         outputs.append(annotations)
+     if len(outputs) == 1:
+         return outputs[0]
+     return outputs
+
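+ # Input layout that load_msa_txt expects, inferred from the parsing logic
+ # above (an illustration, not a documented spec): the first line holds only
+ # the query sequence; each later line holds one aligned sequence, optionally
+ # followed by an identity value and an annotation, e.g.
+ #
+ #   QNLLKNLAASLGRKPF...
+ #   QNLLKNLAASLGRKPF... 1.000 annotation_for_query_hit
+ #   QNLMKNLAQSLGRKPF... 0.873 annotation_for_homolog
+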
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """
+ """
+
+ _HOMEPAGE = "xxxxx"
+
+ _LICENSE = "xxxxx"
+
+ class DownStreamConfig(datasets.BuilderConfig):
+     """BuilderConfig for the downstream task dataset."""
+
+     def __init__(self, *args, **kwargs):
+         """BuilderConfig for the downstream task dataset.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(*args, name="downstream", **kwargs)
+
+ class DownStreamTasks(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.1.0")
+     BUILDER_CONFIG_CLASS = DownStreamConfig
+     BUILDER_CONFIGS = [DownStreamConfig()]
+     DEFAULT_CONFIG_NAME = None
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "seq": datasets.Value("string"),
+                 "label": datasets.Array2D(shape=(None, 2), dtype='int32'),
+                 "msa": datasets.Value("string"),
+                 "str_emb": datasets.Array2D(shape=(None, 384), dtype='float32'),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(
+         self, dl_manager: datasets.DownloadManager
+     ) -> List[datasets.SplitGenerator]:
+         train_parquet_file = dl_manager.download("data/train-00000-of-00001.parquet")
+         valid_parquet_file = dl_manager.download("data/valid-00000-of-00001.parquet")
+         test_parquet_file = dl_manager.download("data/test-00000-of-00001.parquet")
+         msa_path = dl_manager.download_and_extract("msa.tar")
+         str_file = dl_manager.download("md5_to_str.fasta")
+         codebook_file = dl_manager.download("codebook.pt")
+
+         # The tar archive extracts into an `msa` sub-directory
+         assert os.path.exists(join(msa_path, 'msa'))
+         msa_path = join(msa_path, 'msa')
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "parquet_file": train_parquet_file,
+                     "msa_path": msa_path,
+                     "str_file": str_file,
+                     "codebook_file": codebook_file
+                 }
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "parquet_file": valid_parquet_file,
+                     "msa_path": msa_path,
+                     "str_file": str_file,
+                     "codebook_file": codebook_file
+                 }
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "parquet_file": test_parquet_file,
+                     "msa_path": msa_path,
+                     "str_file": str_file,
+                     "codebook_file": codebook_file
+                 }
+             ),
+         ]
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, parquet_file, msa_path, str_file, codebook_file):
+         dataset = datasets.Dataset.from_parquet(parquet_file)
+         # md5(seq) -> '-'-joined structure-token indices
+         md5_to_str = load_fasta(str_file)
+         # (num_tokens, 384) structure-token embedding matrix
+         codebook = torch.load(codebook_file, 'cpu', weights_only=True).numpy()
+
+         for key, item in enumerate(dataset):
+             seq = item['seq']
+             label = item['label']
+             md5_val = get_md5(seq)
+             if md5_val not in md5_to_str or md5_to_str[md5_val] == "":
+                 # No predicted structure available: fall back to an all-zero embedding
+                 str_emb = np.zeros([len(seq), 384], dtype=np.float32)
+             else:
+                 str_toks = np.array([int(x) for x in md5_to_str[md5_val].split('-')])
+                 str_emb = codebook[str_toks]
+
+             msa = load_msa_txt(join(msa_path, md5_val + '.txt.gz'))
+             assert len(msa[0]) == len(seq), f"Error: {len(msa[0])} != {len(seq)}"
+             assert len(msa[0]) == str_emb.shape[0], f"Error: {len(msa[0])} != {str_emb.shape[0]}"
+             yield key, {
+                 "seq": seq,
+                 "label": label,
+                 "msa": "|".join(msa),
+                 "str_emb": str_emb
+             }
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36c321c586e24f63078e0c0a9afb741007dde6f9b440a3f6660d779e5adc4278
+ size 6534162
data/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f9975739bfad8922d6ba9a0dc85eb1e59eb3c400472f26fd7317ab709e6e4a1
+ size 50351780
data/valid-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f45aac6c499611f576d85b940e4d7699e884c71ca7f156025a146fd97f8ad400
+ size 6688323
md5_to_str.fasta ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:585de375b809b1d61135b4f64cdd861aa786d01ad203fca8ff97382c0c70cb41
+ size 14453453
msa.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85907dcd992577756a0a2b49694bc403c97eb895f17740f59f95681cdb147fb6
+ size 3065968640