root committed on
Commit · b9ccadd
Parent(s): eac0c56
update

Files changed:
- .gitattributes +1 -0
- README.md +123 -3
- config.json +3 -0
- fig/Metric.png +0 -0
- fig/logo.png +0 -0
- generation_config.json +3 -0
- model-00001-of-00031.safetensors +3 -0
- model-00002-of-00031.safetensors +3 -0
- model-00003-of-00031.safetensors +3 -0
- model-00004-of-00031.safetensors +3 -0
- model-00005-of-00031.safetensors +3 -0
- model-00006-of-00031.safetensors +3 -0
- model-00007-of-00031.safetensors +3 -0
- model-00008-of-00031.safetensors +3 -0
- model-00009-of-00031.safetensors +3 -0
- model-00010-of-00031.safetensors +3 -0
- model-00011-of-00031.safetensors +3 -0
- model-00012-of-00031.safetensors +3 -0
- model-00013-of-00031.safetensors +3 -0
- model-00014-of-00031.safetensors +3 -0
- model-00015-of-00031.safetensors +3 -0
- model-00016-of-00031.safetensors +3 -0
- model-00017-of-00031.safetensors +3 -0
- model-00018-of-00031.safetensors +3 -0
- model-00019-of-00031.safetensors +3 -0
- model-00020-of-00031.safetensors +3 -0
- model-00021-of-00031.safetensors +3 -0
- model-00022-of-00031.safetensors +3 -0
- model-00023-of-00031.safetensors +3 -0
- model-00024-of-00031.safetensors +3 -0
- model-00025-of-00031.safetensors +3 -0
- model-00026-of-00031.safetensors +3 -0
- model-00027-of-00031.safetensors +3 -0
- model-00028-of-00031.safetensors +3 -0
- model-00029-of-00031.safetensors +3 -0
- model-00030-of-00031.safetensors +3 -0
- model-00031-of-00031.safetensors +3 -0
- model.safetensors.index.json +3 -0
- special_tokens_map.json +3 -0
- tokenizer.json +3 -0
- tokenizer_config.json +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,3 +1,123 @@
----
-license: apache-2.0
-

---
license: apache-2.0
language:
- en
- zh
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-72B
pipeline_tag: text-generation
---

<div style="text-align: center;">
<h1>YiXin-Distill-Qwen-72B</h1>
<img src="./fig/logo.png" alt="YiXin Logo">
</div>

## Model Overview

**YiXin-Distill-Qwen-72B: A High-Performance Distilled Model for Mathematical and General Reasoning**, derived from Qwen2.5-72B using reinforcement learning. It is specifically optimized for mathematical reasoning and general knowledge tasks. Leveraging advanced distillation techniques, the model enhances reasoning capabilities while maintaining computational efficiency, and builds on the robust Qwen foundation to target state-of-the-art results across benchmarks. Our evaluations show that YiXin-Distill-Qwen-72B improves over comparable distilled models on key mathematical and general reasoning tasks, with average gains of 5 to 11 percentage points.

## Training Details

### Data Collection and Processing

YiXin-Distill-Qwen-72B is trained on a carefully curated, high-quality dataset designed to improve mathematical reasoning and general knowledge comprehension. The data pipeline follows a structured multi-stage approach to ensure strong model performance while minimizing noise.

#### 1. **Dataset Aggregation**

- Built upon currently available high-quality open-source datasets.
- Covers multiple domains, including **mathematics and general knowledge**.

#### 2. **Data Filtering and Quality Assessment**

We implemented a comprehensive quality control framework, using DeepSeek-R1 as an LLM judge to evaluate data quality. The assessment criteria included the following (a minimal sketch of such a filtering pass appears after the list):

- **Difficulty Level**: Data samples were categorized into simple, moderate, and difficult tiers to ensure balanced representation across complexity levels.
- **Ground Truth Verification**: We employed rigorous verification processes to ensure the correctness of answers within the dataset.
- **Quality Scoring**: Each prompt-response pair was evaluated for complexity, instructional clarity, and potential to enhance reasoning abilities.
- **Response Length Analysis**: Responses that failed to meet minimum length requirements were excluded, as they typically lack sufficient information to provide a meaningful training signal.
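
A minimal sketch of what one pass of this LLM-judge filtering could look like is shown below. The judge prompt, the `call_judge` placeholder, the score scale, and the thresholds are illustrative assumptions, not the exact pipeline.

```python
# Illustrative sketch of an LLM-judge filtering pass (assumptions, not the exact pipeline).
import json

JUDGE_PROMPT = """Rate the following prompt/response pair.
Return JSON with fields: difficulty ("simple", "moderate", or "difficult"),
quality (an integer from 1 to 10), answer_correct (true or false).

Prompt: {prompt}
Response: {response}"""


def call_judge(judge_input: str) -> str:
    """Placeholder for a request to the judge model (e.g. DeepSeek-R1); returns its raw JSON reply."""
    raise NotImplementedError


def keep_sample(sample: dict, min_quality: int = 7, min_response_chars: int = 200) -> bool:
    # Response-length filter: very short responses rarely carry a useful reasoning trace.
    if len(sample["response"]) < min_response_chars:
        return False
    verdict = json.loads(call_judge(JUDGE_PROMPT.format(**sample)))
    # Drop pairs whose answer the judge cannot verify against the ground truth.
    if not verdict.get("answer_correct", False):
        return False
    # Keep only pairs the judge scores as sufficiently instructive; the difficulty tag can be
    # used afterwards to balance the simple/moderate/difficult tiers.
    return verdict.get("quality", 0) >= min_quality


# filtered = [s for s in raw_samples if keep_sample(s)]
```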

#### 3. **Validation and Refinement**

For subjective answers, we employed an LLM-based judge to validate response quality and relevance.
Mathematical content underwent additional validation procedures (a sketch of the final-answer check follows the list):

- Mathematical answers and their corresponding solutions were systematically validated.
- A critic model assessed each solution process to ensure logical consistency and correctness of the mathematical reasoning.
- Solutions with logical gaps or incorrect reasoning patterns were either corrected or removed from the training set.
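
The snippet below is a rough illustration of such a final-answer check: it compares a model's answer against the reference symbolically, with `critic_accepts` standing in for the critic model's pass over the full solution. The function names and the fallback rule are assumptions, not the actual validation code.

```python
# Illustrative sketch of the mathematical final-answer check (assumptions, not the actual code).
from sympy import simplify, sympify
from sympy.core.sympify import SympifyError


def answers_match(predicted: str, reference: str) -> bool:
    """True if the two final answers are symbolically equivalent."""
    try:
        return simplify(sympify(predicted) - sympify(reference)) == 0
    except (SympifyError, TypeError):
        # Fall back to exact string comparison for answers sympy cannot parse.
        return predicted.strip() == reference.strip()


def critic_accepts(solution_text: str) -> bool:
    """Placeholder for the critic model that checks the step-by-step reasoning."""
    raise NotImplementedError


# A sample survives only if both checks pass:
# keep = answers_match(predicted_answer, reference_answer) and critic_accepts(solution_text)
```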

## Distillation Process

YiXin-Distill-Qwen-72B adopts a progressive two-stage distillation approach, iteratively refining model performance through intelligent data selection and optimization. The training framework continuously identifies and removes high-confidence samples (cases the model already handles reliably) to mitigate overfitting, while iteratively refining low-confidence samples to strengthen weak reasoning patterns. Through multiple fine-tuning cycles and quality assessments, the model achieves a balanced improvement in efficiency and accuracy across mathematical and general reasoning benchmarks.
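
The data selection between rounds can be pictured roughly as below. The pass-rate confidence signal and both thresholds are illustrative assumptions rather than the actual training framework.

```python
# Illustrative sketch of confidence-based sample selection between distillation rounds
# (the pass-rate signal and the thresholds are assumptions, not the actual framework).
from typing import Callable, Dict, List


def select_for_next_round(samples: List[Dict],
                          pass_rate: Callable[[Dict], float],
                          high_conf: float = 0.9,
                          low_conf: float = 0.5) -> List[Dict]:
    next_round = []
    for sample in samples:
        rate = pass_rate(sample)  # e.g. fraction of k sampled generations that are correct
        if rate >= high_conf:
            # Already solved reliably: drop it to avoid overfitting on easy cases.
            continue
        # Flag the weakest cases so they can be refined (e.g. regenerated by the teacher
        # or re-verified) before the next fine-tuning cycle.
        sample["needs_refinement"] = rate < low_conf
        next_round.append(sample)
    return next_round
```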

## Evaluation Results

YiXin-Distill-Qwen-72B was benchmarked against QwQ-32B, DeepSeek-R1-Distill-Qwen-32B, DeepSeek-R1-Distill-Llama-70B, and DeepSeek-R1 on mathematical reasoning and general knowledge tasks:



| Metric        | QwQ-32B | DeepSeek-R1-Distill-Qwen-32B | DeepSeek-R1-Distill-Llama-70B | DeepSeek-R1 | YiXin-Distill-Qwen-72B |
|---------------|---------|------------------------------|-------------------------------|-------------|------------------------|
| MATH-500      | 96.2    | 91.2                         | 94.0                          | 94.4        | **97.0**               |
| GPQA-Diamond  | 62.6    | 62.1                         | 62.6                          | **74.8**    | 69.2                   |
| AIME-24       | 73.3    | 66.7                         | 70.0                          | **80.0**    | 76.7                   |
| AIME-25       | 63.3    | 60.0                         | 46.7                          | 63.3        | **73.3**               |
| MMLU-Pro      | 86.2    | 78.3                         | 80.3                          | 92.4        | **92.6**               |
| **Average**   | 76.3    | 71.7                         | 70.7                          | 81.0        | **81.8**               |

YiXin-Distill-Qwen-72B achieves the highest average score (81.8) among the compared models, leading on MATH-500, AIME-25, and MMLU-Pro, while DeepSeek-R1 remains ahead on GPQA-Diamond and AIME-24.

## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YiXin-AILab/YiXin-Distill-Qwen-72B"

# Load the model and tokenizer; device_map="auto" shards the 72B weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "8+8=?"
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Apply the chat template and generate a response.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens, then decode them.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Limitations

Despite its strong performance, YiXin-Distill-Qwen-72B has certain limitations:

- **Potential Security Concerns:** YiXin-Distill-Qwen-72B may be vulnerable to adversarial attacks, prompt injection, and data leakage. Proper security measures are recommended for sensitive deployments.
- **Domain-Specific Biases:** Performance may vary across different domains, particularly those underrepresented in the training data.
- **Potential Loss in Distillation:** Some nuanced reasoning capabilities from the teacher model may be reduced during the distillation process.

## Citation

If you use YiXin-Distill-Qwen-72B in your research, please cite this work appropriately:

```bibtex
@misc{yixindistillqwen-72b,
      title={YiXin-Distill-Qwen-72B: A High-Performance Distilled Model for Mathematical and General Reasoning},
      author={YiXin-AILab},
      year={2025},
      url={https://huggingface.co/YiXin-AILab/YiXin-Distill-Qwen-72B}
}
```

## Acknowledgments

We acknowledge the contributions of the open-source community and the researchers who have developed and maintained the Qwen and DeepSeek models. Their work has significantly advanced the field of large language model distillation and reasoning capabilities.

config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4319e45a03aaab9cd6614c94cb2a6991476a242b2e21ec11baf52fbdb6e089c0
+size 747

fig/Metric.png
ADDED
(binary image)

fig/logo.png
ADDED
(binary image)

generation_config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:519a8db66fcfd0d18731eb75169ff8393a6f9c479fd50142fb5ee2580bb9ae64
+size 243

model-00001-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09e09d506a7a711b5b01ec12d09cf1fa6d9edf16b6a55f7f57711d34529ded4c
+size 4548798728

model-00002-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa8d9fcab536a046f4f031ecdc970fee717de22b6573455a88253eb663391464
+size 4964101384

model-00003-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cca657c26ef2fbbdf941a6d308578bd5e161ad1bbab4b1882d1820ab3b79c871
+size 4781637328

model-00004-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85e1d86761d2f4a007f6d78a933e7c0b52f2b74c9cf8e9d3e8277a86acb0cdd9
+size 4781670320

model-00005-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:388081e6bdb61dcd6534b7d7a06853b85607828619783679481e47924b62901f
+size 4781670360

model-00006-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76c8519759aa97fb0f0674b1556bfee4306b7f14b276295cea696b28ea38fbdb
+size 4964101416

model-00007-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e747b4af99107d61005b485ea25eed1e54f6c2d697f7e7bbe4e21658f84e2f21
+size 4781637360

model-00008-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa7bdd783a605b5d41d3b2d15b2005ca6e3e0db349997cebc8d9cc4be8aaf0af
+size 4781670360

model-00009-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9de4b2d3ccfaa97179edab25690a2f1bfaebe03e625d02e83640bbfca2657b4
+size 4781670360

model-00010-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:430683be7c7d3deee6990eae25bedf54f5bd5b06e4b88b6b291c923febd77a9a
+size 4964101416

model-00011-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d10ea7fe295d170bd84ed6189cc2daa04c48eecf05a5fe2c593562e152036536
+size 4781637360

model-00012-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f22b55b19b727fb738e86d1260266041228a708f814917fa988b27747c58084e
+size 4781670360

model-00013-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56b8daf4c44562e86b83ee3c44cff426d783d966710e16e4887815ae48644b3c
+size 4781670360

model-00014-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc6830559585bf698d94682f9ae87f8bd4ead33bd39dcd1d0dce8bd066f45dcc
+size 4964101416

model-00015-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69b70f4b86c627deaac6556f0143abd2bb72ffe126822e3aa910dbe4dff6ac66
+size 4781637360

model-00016-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2d1fa73a0573f500f04436b121990cda8eb4bfebd2fddf66121b48e08024915
+size 4781670360

model-00017-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4704e67c824c24175204321dd435d1d96886c4cc5e4e286660ff5fb633940998
+size 4781670360

model-00018-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63850f3f3fa385ffa46064274755f655d3b562f719662db448f42f5eac5aa58f
+size 4964101416

model-00019-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11aba4ce2329c80bbca3410a40fb24df690513cf9a732cf1f022632624c02ee1
+size 4781637360

model-00020-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f437187b803388b05da11d22f3485fea40b8727f0ff840b6715f77432e9fa0b0
+size 4781670360

model-00021-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:955b25dfc5ee66d2d364addb3fb5dce8834a6e00dbb463089a2c5f4346837f89
+size 4781670360

model-00022-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:224493efc57ef47d21f993a528e01c1a8c29dee77a3a19856d377e35f5f2f129
+size 4964101416

model-00023-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5831d6483a7b8a788968534353217849719f2117f106aae7139c96b924d6b155
+size 4781637360

model-00024-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:813917da394eacac5e22e653e7c076c2f4ce3c2d55da11ab2dc6c33dfe136677
+size 4781670360

model-00025-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a249a01b1912d5a4962bf4459d0a0b1f3d7d3a71e12c310bf2133342eec00a6a
+size 4781670360

model-00026-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4563b0a2d0559b4dedd22f0a95d24915884d299b2552ea508ccf7b02204b60d5
+size 4964101416

model-00027-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4670aea26ae91f740155a7906d506f87fd123f8b56d7d39e0cb834c6f9e7c177
+size 4781637360

model-00028-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e8721ee5c2b128e9ddd815f44140e8b0afee395dc4029c4472043e6b1fbffa2
+size 4781670360

model-00029-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b79ef188f705640408b726520162112a0d41aa0186e19a737ac4cb8e5258c8e
+size 4781670360

model-00030-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:818904b5cf78b9541970e09a672c0b6c0548e2316a689c4b087f7c778d5fc560
+size 3208747032

model-00031-of-00031.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a61ab4e32927a1889b5df4dc90d6764b414a9cb38e1370236ed11af69665e58
+size 2491416704

model.safetensors.index.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1b2faa7a18678ae6b7a633f3b7b39df1390b48c7ce2bde9bb74509cb1d7a4e3
+size 79025

special_tokens_map.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acb949a6d246354249847f5e1f0dabe04c258271eab55a6d6fac251d7af41075
+size 482

tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4beb9052cf06eb2572cd7a3fa78a6be4b876d81f194c4e33b025e5f9792d6d38
+size 11423171

tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eefb79560c541df33926ab656fb2166bb4c68145f7633a544b3f030c0acc8b07
+size 7166