Thejina committed (verified)
Commit b9abe25 · 1 Parent(s): 28d5050

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,850 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:5822
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: nomic-ai/nomic-embed-text-v1.5
+ widget:
+ - source_sentence: "submitted to the CIA for each year.” Id. at 1–2. On July 22,\
+     \ 2010, the CIA responded to this \nrequest, stating “[w]e . . . have determined\
+     \ that our record systems are not configured in a way \nthat would allow us to\
+     \ perform a search reasonably calculated to lead to the responsive record \nwithout\
+     \ an unreasonable effort.” First Lutz Decl. Ex. L at 1, No. 11-444, ECF No. 20-3.\
+     \ As a"
+   sentences:
+   - How many instances of individual's names does the plaintiff point to?
+   - What date did the CIA respond to the request?
+   - What phrase does the Bar propose to delete references to in the Preamble to Chapter
+     4?
+ - source_sentence: "City Department of Education, the self-represented plaintiff \n\
+     submitted a filing containing hallucinations. No. 24-cv-04232, \n \n20 \n2024\
+     \ WL 3460049, at *7 (S.D.N.Y. July 18, 2024) (unpublished \nopinion). The court\
+     \ noted that “[s]anctions may be imposed for \nsubmitting false and nonexistent\
+     \ legal authority to the [c]ourt.” Id. \nHowever, the court declined to impose\
+     \ sanctions due to the"
+   sentences:
+   - In which sections of their opposition does the plaintiff discuss the deliberative-process
+     privilege?
+   - Who submitted the filing containing hallucinations?
+   - When did the plaintiff file a motion?
+ - source_sentence: "§ 424 and Exemption 3; Exemption 5; and/or Exemption 6. See Second\
+     \ Williams Decl. Ex. A. \n120 \n \nTherefore, the Court need not decide whether\
+     \ the DIA has the independent authority to invoke \nthe National Security Act\
+     \ as an Exemption 3 withholding statute. \n3. \nODNI \nFinally, the plaintiff\
+     \ challenges the ODNI’s decision to withhold certain portions of e-"
+   sentences:
+   - How many counts did EPIC bring related to the APA?
+   - Which organization's decision is being challenged by the plaintiff?
+   - Does the Government agree with EPIC's claim about their Answer?
+ - source_sentence: "confidentiality agreement/order, that remain following those discussions.\
+     \ This is a \nfinal report and notice of exceptions shall be filed within three\
+     \ days of the date of \nthis report, pursuant to Court of Chancery Rule 144(d)(2),\
+     \ given the expedited and \nsummary nature of Section 220 proceedings. \n \n\
+     \ \n \n \n \n \n \nRespectfully, \n \n \n \n \n \n \n \n \n/s/ Patricia W. Griffin"
+   sentences:
+   - Who signed this document?
+   - Did Mr. Mooney allege that the video was altered or tampered with?
+   - Did the plaintiff report the defendant at that time?
+ - source_sentence: "such an argument, and she does not offer any case law, cites to\
+     \ secondary sources, dictionaries \nor grammatical texts, arguments by analogy,\
+     \ or other citations, except for the mere assertion \nthat defendant failed to\
+     \ move in a timely fashion after he was “on notice” of the ex parte order. \n\
+     A reviewing court is entitled to have issues clearly defined with relevant authority\
+     \ cited."
+   sentences:
+   - What page is Cross-MJAR's emphasis mentioned on?
+   - What mere assertion does she make?
+   - On what dates did the Commission meet in 2019?
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: nomic-embed-text-v1.5
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 768
+       type: dim_768
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.5486862442040186
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.5965996908809892
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.7017001545595054
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7697063369397218
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.5486862442040186
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.5239567233384853
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.40989180834621336
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.24142194744976814
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.19049459041731065
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.5101751674394642
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6503091190108191
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7595311695002576
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.6615339195276682
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.6004440519123668
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.6427552042140723
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 512
+       type: dim_512
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.5409582689335394
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.58887171561051
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.6924265842349304
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7743431221020093
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.5409582689335394
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.5172591447707368
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.4034003091190108
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.24188562596599691
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.18740340030911898
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.5054095826893354
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6411643482740855
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7622359608449253
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.6576404555647709
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.5934416476533937
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.6355153178607286
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 256
+       type: dim_256
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.508500772797527
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.5564142194744977
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.6707882534775889
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.7449768160741885
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.508500772797527
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.4873776403915508
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.38639876352395675
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.23122102009273574
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.17671303451828954
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.47707367336424517
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.6141164348274084
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.7257856774858321
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.6257588263652936
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.562961531856431
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.6091899586876254
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 128
+       type: dim_128
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.45131375579598143
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.5054095826893354
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.58887171561051
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.6862442040185471
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.45131375579598143
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.437403400309119
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.3415765069551777
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.21298299845440496
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.15700669757856775
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.4282586295723854
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.5426326635754766
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.6720762493560021
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.5679548352076085
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.503881160913618
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.5511797935827811
+       name: Cosine Map@100
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: dim 64
+       type: dim_64
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.35239567233384855
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.3894899536321484
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.47295208655332305
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.5641421947449768
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.35239567233384855
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.33900051519835134
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.26955177743431225
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.1723338485316847
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.12171561051004637
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.33217413704276144
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.4310922205048943
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.5446934569809376
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.45200452556542003
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.39659662422413555
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.44614347894124107
+       name: Cosine Map@100
+ ---
+
+ # nomic-embed-text-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) <!-- at revision a03db6748c80237063eb0546ac6b627eca2318cb -->
+ - **Maximum Sequence Length:** 8192 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+   - json
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
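+
+ The Pooling module above uses plain mean pooling (`pooling_mode_mean_tokens: True`): token embeddings are averaged into one sentence vector, with padding positions masked out. A minimal sketch of that step in PyTorch, using made-up tensor shapes purely for illustration:
+
+ ```python
+ import torch
+
+ # Hypothetical tensors standing in for the transformer output
+ token_embeddings = torch.randn(2, 12, 768)  # (batch, seq_len, dim)
+ attention_mask = torch.ones(2, 12)          # 1 = real token, 0 = padding
+
+ mask = attention_mask.unsqueeze(-1)         # (batch, seq_len, 1)
+ # Sum the unmasked token embeddings, then divide by the token count
+ sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
+ print(sentence_embeddings.shape)            # torch.Size([2, 768])
+ ```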
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("Thejina/nomic-embed-text-finetuned")
+ # Run inference
+ sentences = [
+     'such an argument, and she does not offer any case law, cites to secondary sources, dictionaries \nor grammatical texts, arguments by analogy, or other citations, except for the mere assertion \nthat defendant failed to move in a timely fashion after he was “on notice” of the ex parte order. \nA reviewing court is entitled to have issues clearly defined with relevant authority cited.',
+     'What mere assertion does she make?',
+     "What page is Cross-MJAR's emphasis mentioned on?",
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
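+
+ Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with a graceful quality trade-off (see the evaluation tables below). A minimal sketch of how this is typically done with the library's `truncate_dim` argument:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Load the same model, but keep only the first 256 embedding dimensions
+ model = SentenceTransformer("Thejina/nomic-embed-text-finetuned", truncate_dim=256)
+
+ embeddings = model.encode(["What date did the CIA respond to the request?"])
+ print(embeddings.shape)
+ # (1, 256)
+ ```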
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Dataset: `dim_768`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 768
+   }
+   ```
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.5487     |
+ | cosine_accuracy@3   | 0.5966     |
+ | cosine_accuracy@5   | 0.7017     |
+ | cosine_accuracy@10  | 0.7697     |
+ | cosine_precision@1  | 0.5487     |
+ | cosine_precision@3  | 0.524      |
+ | cosine_precision@5  | 0.4099     |
+ | cosine_precision@10 | 0.2414     |
+ | cosine_recall@1     | 0.1905     |
+ | cosine_recall@3     | 0.5102     |
+ | cosine_recall@5     | 0.6503     |
+ | cosine_recall@10    | 0.7595     |
+ | **cosine_ndcg@10**  | **0.6615** |
+ | cosine_mrr@10       | 0.6004     |
+ | cosine_map@100      | 0.6428     |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_512`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 512
+   }
+   ```
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.541      |
+ | cosine_accuracy@3   | 0.5889     |
+ | cosine_accuracy@5   | 0.6924     |
+ | cosine_accuracy@10  | 0.7743     |
+ | cosine_precision@1  | 0.541      |
+ | cosine_precision@3  | 0.5173     |
+ | cosine_precision@5  | 0.4034     |
+ | cosine_precision@10 | 0.2419     |
+ | cosine_recall@1     | 0.1874     |
+ | cosine_recall@3     | 0.5054     |
+ | cosine_recall@5     | 0.6412     |
+ | cosine_recall@10    | 0.7622     |
+ | **cosine_ndcg@10**  | **0.6576** |
+ | cosine_mrr@10       | 0.5934     |
+ | cosine_map@100      | 0.6355     |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_256`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 256
+   }
+   ```
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.5085     |
+ | cosine_accuracy@3   | 0.5564     |
+ | cosine_accuracy@5   | 0.6708     |
+ | cosine_accuracy@10  | 0.745      |
+ | cosine_precision@1  | 0.5085     |
+ | cosine_precision@3  | 0.4874     |
+ | cosine_precision@5  | 0.3864     |
+ | cosine_precision@10 | 0.2312     |
+ | cosine_recall@1     | 0.1767     |
+ | cosine_recall@3     | 0.4771     |
+ | cosine_recall@5     | 0.6141     |
+ | cosine_recall@10    | 0.7258     |
+ | **cosine_ndcg@10**  | **0.6258** |
+ | cosine_mrr@10       | 0.563      |
+ | cosine_map@100      | 0.6092     |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_128`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 128
+   }
+   ```
+
+ | Metric              | Value     |
+ |:--------------------|:----------|
+ | cosine_accuracy@1   | 0.4513    |
+ | cosine_accuracy@3   | 0.5054    |
+ | cosine_accuracy@5   | 0.5889    |
+ | cosine_accuracy@10  | 0.6862    |
+ | cosine_precision@1  | 0.4513    |
+ | cosine_precision@3  | 0.4374    |
+ | cosine_precision@5  | 0.3416    |
+ | cosine_precision@10 | 0.213     |
+ | cosine_recall@1     | 0.157     |
+ | cosine_recall@3     | 0.4283    |
+ | cosine_recall@5     | 0.5426    |
+ | cosine_recall@10    | 0.6721    |
+ | **cosine_ndcg@10**  | **0.568** |
+ | cosine_mrr@10       | 0.5039    |
+ | cosine_map@100      | 0.5512    |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_64`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 64
+   }
+   ```
+
+ | Metric              | Value     |
+ |:--------------------|:----------|
+ | cosine_accuracy@1   | 0.3524    |
+ | cosine_accuracy@3   | 0.3895    |
+ | cosine_accuracy@5   | 0.473     |
+ | cosine_accuracy@10  | 0.5641    |
+ | cosine_precision@1  | 0.3524    |
+ | cosine_precision@3  | 0.339     |
+ | cosine_precision@5  | 0.2696    |
+ | cosine_precision@10 | 0.1723    |
+ | cosine_recall@1     | 0.1217    |
+ | cosine_recall@3     | 0.3322    |
+ | cosine_recall@5     | 0.4311    |
+ | cosine_recall@10    | 0.5447    |
+ | **cosine_ndcg@10**  | **0.452** |
+ | cosine_mrr@10       | 0.3966    |
+ | cosine_map@100      | 0.4461    |
+
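+ The five tables above report the same retrieval evaluation at decreasing truncation dimensions; quality degrades gradually from 768 down to 64 dimensions. As a rough sketch, each table comes from an evaluator constructed along these lines (the `queries`, `corpus`, and `relevant_docs` dicts here are hypothetical placeholders, not the actual held-out data):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+
+ queries = {"q1": "What date did the CIA respond to the request?"}
+ corpus = {"d1": "On July 22, 2010, the CIA responded to this request ..."}
+ relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids
+
+ evaluator = InformationRetrievalEvaluator(
+     queries=queries,
+     corpus=corpus,
+     relevant_docs=relevant_docs,
+     name="dim_256",
+     truncate_dim=256,  # evaluate on embeddings truncated to 256 dims
+ )
+ model = SentenceTransformer("Thejina/nomic-embed-text-finetuned")
+ results = evaluator(model)  # dict of accuracy/precision/recall/NDCG/MRR/MAP
+ ```
+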
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### json
+
+ * Dataset: json
+ * Size: 5,822 training samples
+ * Columns: <code>positive</code> and <code>anchor</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | positive | anchor |
+   |:--------|:---------|:-------|
+   | type    | string   | string |
+   | details | <ul><li>min: 46 tokens</li><li>mean: 91.09 tokens</li><li>max: 324 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.89 tokens</li><li>max: 43 tokens</li></ul> |
+ * Samples:
+   | positive | anchor |
+   |:---------|:-------|
+   | <code>functional test, too. Id. at 89–90. Still, the Court made clear that this functional test was “not <br>relevant.” Id. at 90. So, just as in Energy Research, its application of the functional test was <br>dicta. And because this discussion relied on the dicta from Energy Research, this was dicta <br>upon dicta. <br> <br> The Government is thus imprecise when it asserts as the “law of the case” that the</code> | <code>What page is the functional test mentioned as 'not relevant'?</code> |
+   | <code>authenticated through his testimony under Maryland Rule 5-901(b)(1) as a witness with <br>personal knowledge of the events. <br>- 6 - <br>The part of the video depicting the shooting was properly authenticated through <br>circumstantial evidence under Maryland Rule 5-901(b)(4), as there was sufficient <br>circumstantial evidence from which a reasonable juror could have inferred that the video</code> | <code>Which part of the video was authenticated?</code> |
+   | <code>KLAN202300916 <br> <br> <br> <br> <br>9<br>Los derechos morales, a su vez, están fundamentalmente <br>protegidos por la legislación estatal. Esta reconoce los derechos de <br>los autores como exclusivos de estos y los protege no solo en <br>beneficio propio, sino también de la sociedad por la contribución <br>social y cultural que históricamente se le ha reconocido a la</code> | <code>¿En beneficio de quién se protegen los derechos de los autores?</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
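+
+ In other words, MultipleNegativesRankingLoss (in-batch negatives over anchor/positive pairs) is applied at each Matryoshka dimension and the per-dimension losses are summed with equal weights. A minimal sketch of how such an objective is typically wired up in Sentence Transformers (loading the base model here is illustrative; `trust_remote_code=True` is needed for Nomic's custom architecture):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
+
+ # Wrap the ranking loss so it is applied at every truncation dimension
+ base_loss = MultipleNegativesRankingLoss(model)
+ loss = MatryoshkaLoss(
+     model,
+     base_loss,
+     matryoshka_dims=[768, 512, 256, 128, 64],
+     matryoshka_weights=[1, 1, 1, 1, 1],
+ )
+ ```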
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `gradient_accumulation_steps`: 16
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 4
+ - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.1
+ - `bf16`: True
+ - `load_best_model_at_end`: True
+ - `optim`: adamw_torch_fused
+ - `batch_sampler`: no_duplicates
+
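+ Note that with `gradient_accumulation_steps` of 16 and a per-device batch of 32, each optimizer step aggregates gradients over 32 × 16 = 512 training pairs. The in-batch negatives used by MultipleNegativesRankingLoss still come from each forward batch of 32, since gradient accumulation averages gradients rather than enlarging the pool of negatives per example.
+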
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 16
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 4
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch      | Step   | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
+ |:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
+ | 0.8791     | 10     | 69.7578       | -                      | -                      | -                      | -                      | -                     |
+ | 1.0        | 12     | -             | 0.6178                 | 0.6069                 | 0.5742                 | 0.5088                 | 0.4115                |
+ | 1.7033     | 20     | 28.4334       | -                      | -                      | -                      | -                      | -                     |
+ | 2.0        | 24     | -             | 0.6589                 | 0.6509                 | 0.6268                 | 0.5616                 | 0.4494                |
+ | 2.5275     | 30     | 20.1123       | -                      | -                      | -                      | -                      | -                     |
+ | 3.0        | 36     | -             | 0.6621                 | 0.6573                 | 0.6263                 | 0.5677                 | 0.4508                |
+ | 3.3516     | 40     | 16.5444       | -                      | -                      | -                      | -                      | -                     |
+ | **3.7033** | **44** | **-**         | **0.6615**             | **0.6576**             | **0.6258**             | **0.568**              | **0.452**             |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.11.12
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.51.3
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.6.0
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "activation_function": "swiglu",
+   "architectures": [
+     "NomicBertModel"
+   ],
+   "attn_pdrop": 0.0,
+   "auto_map": {
+     "AutoConfig": "nomic-ai/nomic-bert-2048--configuration_hf_nomic_bert.NomicBertConfig",
+     "AutoModel": "nomic-ai/nomic-bert-2048--modeling_hf_nomic_bert.NomicBertModel",
+     "AutoModelForMaskedLM": "nomic-ai/nomic-bert-2048--modeling_hf_nomic_bert.NomicBertForPreTraining"
+   },
+   "bos_token_id": null,
+   "causal": false,
+   "dense_seq_output": true,
+   "embd_pdrop": 0.0,
+   "eos_token_id": null,
+   "fused_bias_fc": true,
+   "fused_dropout_add_ln": true,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-12,
+   "max_trained_positions": 2048,
+   "mlp_fc1_bias": false,
+   "mlp_fc2_bias": false,
+   "model_type": "nomic_bert",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": 3072,
+   "n_layer": 12,
+   "n_positions": 8192,
+   "pad_vocab_size_multiple": 64,
+   "parallel_block": false,
+   "parallel_block_tied_norm": false,
+   "prenorm": false,
+   "qkv_proj_bias": false,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.0,
+   "rotary_emb_base": 1000,
+   "rotary_emb_fraction": 1.0,
+   "rotary_emb_interleaved": false,
+   "rotary_emb_scale_base": null,
+   "rotary_scaling_factor": null,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.0,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.51.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "use_flash_attn": true,
+   "use_rms_norm": false,
+   "use_xentropy": true,
+   "vocab_size": 30528
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.51.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:188b92cd53a4b5eeabb440a861943b7964db7f341518be40857cc4346fbb3f3d
+ size 546938168
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 8192,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 8192,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff