ekurtic committed
Commit 452cf7d · 1 Parent(s): 0c2d61a
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,183 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - moe
+ - fp8
+ - vllm
+ ---
+
+ # Mixtral-8x22B-Instruct-v0.1-FP8
+
+ ## Model Overview
+ - **Model Architecture:** Mixtral-8x22B-Instruct-v0.1
+   - **Input:** Text
+   - **Output:** Text
+ - **Model Optimizations:**
+   - **Weight quantization:** FP8
+   - **Activation quantization:** FP8
+ - **Release Date:** 2/26/2025
+ - **Version:** 1.0
+ - **Model Developers:** Neural Magic
+
+ Quantized version of [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1).
+ It achieves an average score of 79.50 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), compared to 79.87 for the unquantized model.
+
+ ### Model Optimizations
+
+ This model was obtained by quantizing the weights and activations of Mixtral-8x22B-Instruct-v0.1 to the FP8 data type, making it ready for inference with vLLM.
+ This optimization reduces the number of bits per parameter from 16 to 8, cutting the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized, with the exception of the MoE router (gate) layers.
+
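+ As a rough illustration of that ~50% reduction, here is a back-of-the-envelope sketch (the ~141B total parameter count is an approximation used only for this example):
+
+ ```python
+ # Approximate weight-storage footprint before and after FP8 quantization.
+ total_params = 141e9                    # ~141B parameters (approximation)
+ bf16_gib = total_params * 2 / 1024**3   # BF16: 2 bytes per parameter
+ fp8_gib = total_params * 1 / 1024**3    # FP8: 1 byte per parameter
+ print(f"BF16 ~{bf16_gib:.0f} GiB, FP8 ~{fp8_gib:.0f} GiB, "
+       f"savings ~{100 * (1 - fp8_gib / bf16_gib):.0f}%")
+ ```
+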
+ ## Deployment
+
+ ### Use with vLLM
+
+ This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
+
+ ```python
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+
+ # Load the tokenizer and the FP8-quantized model, sharded across 4 GPUs.
+ max_model_len, tp_size = 4096, 4
+ model_name = "neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
+ sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
+
+ # Format the conversation with the model's chat template.
+ messages_list = [
+     [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
+ ]
+
+ prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
+
+ outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
+
+ generated_text = [output.outputs[0].text for output in outputs]
+ print(generated_text)
+ ```
+
+ vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
+
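+ As a minimal sketch of the OpenAI-compatible route (assuming a server started with something like `vllm serve neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8 --tensor-parallel-size 4` and listening on the default port 8000):
+
+ ```python
+ # Query a locally running vLLM OpenAI-compatible server.
+ # Assumes the endpoint http://localhost:8000/v1; adjust host/port as needed.
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+ response = client.chat.completions.create(
+     model="neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8",
+     messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
+     temperature=0.3,
+     max_tokens=256,
+ )
+ print(response.choices[0].message.content)
+ ```
+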
+ ## Creation
+
+ This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by saving the code snippet below as `quantize.py` and running it with the following command:
+
+ ```bash
+ python quantize.py --model_id mistralai/Mixtral-8x22B-Instruct-v0.1 --save_path "output_dir" --calib_size 128
+ ```
+
+ ```python
+ import argparse
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from llmcompressor.modifiers.quantization import QuantizationModifier
+ from llmcompressor.transformers import oneshot
+ from llmcompressor.transformers.compression.helpers import calculate_offload_device_map
+ import torch
+ import os
+
+
+ def main():
+     # Set up command line argument parsing
+     parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
+     parser.add_argument('--model_id', type=str, required=True,
+                         help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")')
+     parser.add_argument('--save_path', type=str, default='.',
+                         help='Custom path to save the quantized model. If not provided, will use model_name-FP8')
+     parser.add_argument('--calib_size', type=int, default=256)
+     args = parser.parse_args()
+
+     # Spread the model across all available GPUs (with CPU offload if needed)
+     device_map = calculate_offload_device_map(
+         args.model_id,
+         reserve_for_hessians=False,
+         num_gpus=torch.cuda.device_count(),
+         trust_remote_code=True,
+         torch_dtype=torch.bfloat16,
+     )
+
+     model = AutoModelForCausalLM.from_pretrained(
+         args.model_id, device_map=device_map, torch_dtype=torch.bfloat16, trust_remote_code=True,
+     )
+     tokenizer = AutoTokenizer.from_pretrained(args.model_id)
+
+     # Load and preprocess the calibration dataset
+     NUM_CALIBRATION_SAMPLES = args.calib_size
+     DATASET_ID = "garage-bAInd/Open-Platypus"
+     DATASET_SPLIT = "train"
+     ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
+     ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
+
+     def preprocess(example):
+         concat_txt = example["instruction"] + "\n" + example["output"]
+         return {"text": concat_txt}
+
+     ds = ds.map(preprocess)
+
+     def tokenize(sample):
+         return tokenizer(
+             sample["text"],
+             padding=False,
+             truncation=False,
+             add_special_tokens=True,
+         )
+
+     ds = ds.map(tokenize, remove_columns=ds.column_names)
+
+     # Configure the quantization algorithm and scheme
+     recipe = QuantizationModifier(
+         targets="Linear", scheme="FP8", ignore=["lm_head", "re:.*block_sparse_moe.gate"]
+     )
+
+     # Apply quantization
+     oneshot(
+         model=model,
+         dataset=ds,
+         recipe=recipe,
+         num_calibration_samples=args.calib_size,
+     )
+
+     save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8")
+     os.makedirs(save_path, exist_ok=True)
+
+     # Save to disk in compressed-tensors format
+     model.save_pretrained(save_path, save_compressed=True, skip_compression_stats=True)
+     tokenizer.save_pretrained(save_path)
+     print(f"Model and tokenizer saved to: {save_path}")
+
+
+ if __name__ == "__main__":
+     main()
+ ```
+
+ ## Evaluation
+
+ The model was evaluated on the OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) benchmarks using the following command:
+
+ ```bash
+ lm_eval \
+   --model vllm \
+   --model_args pretrained="neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
+   --tasks openllm \
+   --write_out \
+   --batch_size auto \
+   --output_path output_dir \
+   --show_config
+ ```
+
+ ### Accuracy
+
+ #### OpenLLM Leaderboard V1 evaluation scores
+
+ | Metric | mistralai/Mixtral-8x22B-Instruct-v0.1 | neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8 |
+ |-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
+ | ARC-Challenge (Acc-Norm, 25-shot) | 73.29 | 73.29 |
+ | GSM8K (Strict-Match, 5-shot) | 85.06 | 84.08 |
+ | HellaSwag (Acc-Norm, 10-shot) | 88.94 | 88.85 |
+ | MMLU (Acc, 5-shot) | 77.77 | 77.62 |
+ | TruthfulQA (MC2, 0-shot) | 68.49 | 68.01 |
+ | Winogrande (Acc, 5-shot) | 85.64 | 85.16 |
+ | **Average Score** | **79.87** | **79.50** |
+ | **Recovery (%)** | **100.00** | **99.54** |
+
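+ The summary rows follow directly from the six task scores: the average is their arithmetic mean, and recovery is the ratio of the quantized average to the unquantized average. A quick sketch of that arithmetic:
+
+ ```python
+ # Reproduce the "Average Score" and "Recovery (%)" rows from the per-task scores above.
+ baseline = [73.29, 85.06, 88.94, 77.77, 68.49, 85.64]  # mistralai/Mixtral-8x22B-Instruct-v0.1
+ fp8 = [73.29, 84.08, 88.85, 77.62, 68.01, 85.16]       # neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8
+
+ avg_baseline = sum(baseline) / len(baseline)  # ~79.87
+ avg_fp8 = sum(fp8) / len(fp8)                 # ~79.50
+ print(f"Recovery ~{100 * avg_fp8 / avg_baseline:.1f}%")  # ~99.5%
+ ```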
config.json ADDED
@@ -0,0 +1,130 @@
+ {
+   "_name_or_path": "mistralai/Mixtral-8x22B-Instruct-v0.1",
+   "architectures": [
+     "MixtralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 6144,
+   "initializer_range": 0.02,
+   "intermediate_size": 16384,
+   "max_position_embeddings": 65536,
+   "model_type": "mixtral",
+   "num_attention_heads": 48,
+   "num_experts_per_tok": 2,
+   "num_hidden_layers": 56,
+   "num_key_value_heads": 8,
+   "num_local_experts": 8,
+   "output_router_logits": false,
+   "quantization_config": {
+     "config_groups": {
+       "group_0": {
+         "input_activations": {
+           "actorder": null,
+           "block_structure": null,
+           "dynamic": false,
+           "group_size": null,
+           "num_bits": 8,
+           "observer": "minmax",
+           "observer_kwargs": {},
+           "strategy": "tensor",
+           "symmetric": true,
+           "type": "float"
+         },
+         "output_activations": null,
+         "targets": [
+           "Linear"
+         ],
+         "weights": {
+           "actorder": null,
+           "block_structure": null,
+           "dynamic": false,
+           "group_size": null,
+           "num_bits": 8,
+           "observer": "minmax",
+           "observer_kwargs": {},
+           "strategy": "tensor",
+           "symmetric": true,
+           "type": "float"
+         }
+       }
+     },
+     "format": "float-quantized",
+     "global_compression_ratio": 1.55840454870176,
+     "ignore": [
+       "model.layers.0.block_sparse_moe.gate",
+       "model.layers.1.block_sparse_moe.gate",
+       "model.layers.2.block_sparse_moe.gate",
+       "model.layers.3.block_sparse_moe.gate",
+       "model.layers.4.block_sparse_moe.gate",
+       "model.layers.5.block_sparse_moe.gate",
+       "model.layers.6.block_sparse_moe.gate",
+       "model.layers.7.block_sparse_moe.gate",
+       "model.layers.8.block_sparse_moe.gate",
+       "model.layers.9.block_sparse_moe.gate",
+       "model.layers.10.block_sparse_moe.gate",
+       "model.layers.11.block_sparse_moe.gate",
+       "model.layers.12.block_sparse_moe.gate",
+       "model.layers.13.block_sparse_moe.gate",
+       "model.layers.14.block_sparse_moe.gate",
+       "model.layers.15.block_sparse_moe.gate",
+       "model.layers.16.block_sparse_moe.gate",
+       "model.layers.17.block_sparse_moe.gate",
+       "model.layers.18.block_sparse_moe.gate",
+       "model.layers.19.block_sparse_moe.gate",
+       "model.layers.20.block_sparse_moe.gate",
+       "model.layers.21.block_sparse_moe.gate",
+       "model.layers.22.block_sparse_moe.gate",
+       "model.layers.23.block_sparse_moe.gate",
+       "model.layers.24.block_sparse_moe.gate",
+       "model.layers.25.block_sparse_moe.gate",
+       "model.layers.26.block_sparse_moe.gate",
+       "model.layers.27.block_sparse_moe.gate",
+       "model.layers.28.block_sparse_moe.gate",
+       "model.layers.29.block_sparse_moe.gate",
+       "model.layers.30.block_sparse_moe.gate",
+       "model.layers.31.block_sparse_moe.gate",
+       "model.layers.32.block_sparse_moe.gate",
+       "model.layers.33.block_sparse_moe.gate",
+       "model.layers.34.block_sparse_moe.gate",
+       "model.layers.35.block_sparse_moe.gate",
+       "model.layers.36.block_sparse_moe.gate",
+       "model.layers.37.block_sparse_moe.gate",
+       "model.layers.38.block_sparse_moe.gate",
+       "model.layers.39.block_sparse_moe.gate",
+       "model.layers.40.block_sparse_moe.gate",
+       "model.layers.41.block_sparse_moe.gate",
+       "model.layers.42.block_sparse_moe.gate",
+       "model.layers.43.block_sparse_moe.gate",
+       "model.layers.44.block_sparse_moe.gate",
+       "model.layers.45.block_sparse_moe.gate",
+       "model.layers.46.block_sparse_moe.gate",
+       "model.layers.47.block_sparse_moe.gate",
+       "model.layers.48.block_sparse_moe.gate",
+       "model.layers.49.block_sparse_moe.gate",
+       "model.layers.50.block_sparse_moe.gate",
+       "model.layers.51.block_sparse_moe.gate",
+       "model.layers.52.block_sparse_moe.gate",
+       "model.layers.53.block_sparse_moe.gate",
+       "model.layers.54.block_sparse_moe.gate",
+       "model.layers.55.block_sparse_moe.gate",
+       "lm_head"
+     ],
+     "kv_cache_scheme": null,
+     "quant_method": "compressed-tensors",
+     "quantization_status": "compressed"
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000.0,
+   "router_aux_loss_coef": 0.001,
+   "router_jitter_noise": 0.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.49.0",
+   "use_cache": true,
+   "vocab_size": 32768
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.49.0"
+ }
model-00001-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c8ffabb3909ea3bbda48822363948cf61088db105cb37ad965e7f32d01d908e
+ size 4907576972
model-00002-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b69428ce35d449ab8a3e0d9b6d9f7dac8e075cffb5d30ad9e5f65550fb39f488
+ size 4907602332
model-00003-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b73b298e276e0e68149ae74aa6a07c188ed60086d001510618d2e1be0ef5d87e
+ size 4907602316
model-00004-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:328ed6703ed4015a27e89986396d73cc4f79a49a6268cc547846ae16736a4950
+ size 4907602300
model-00005-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a087474bec9626391b7e39320ff593eefb27b00d22f27762f8d1fb8b81b59970
+ size 4907602284
model-00006-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3d020512e7950780ad13683f0bae170504a340c312e7f0de5ecf904409035bb
+ size 4907602268
model-00007-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0491fa7de4c27f6e268b6a9a253e5b7846ab2834994caba026a61c1b434d491f
+ size 4907602428
model-00008-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d830f82ffed6b13c14780b9c588f3cf95b99375643c75a57eaf21a778a652d3
+ size 4907602412
model-00009-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50b2741183d1771ee6354b6855175fde84f168b77e838d83a3de56af62a9d3f6
+ size 4907602396
model-00010-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b855ff894ac3db5f453b51c6facc6ac9428c7c00a5a44c3f872583829c229cd8
+ size 4907602380
model-00011-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b5437f907ab6bd46adc3330ce01495ff98a5550e339b1b2e89a6331d52f97b0
+ size 4907602364
model-00012-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05cd72e98c02f89daa80f5e19c0ceafcab2ee44f2209f2ecde5b7487279def9e
+ size 4907602348
model-00013-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02a96a662260bda5f363bfbbe399d0a6ea5fef577e2924ff4887e4669f4b9690
+ size 4907602332
model-00014-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:371d13dc775fb5f4411bc0a235fe92925967db40e57abf8df35fb8deeb073378
+ size 4907602316
model-00015-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37a35c28a712251a20588dbbd6d4288aee61ffc427bd27e27fa3837bb84c51ac
+ size 4907602300
model-00016-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fdcfd911574831baeb90a7d69a70ac8f3bba2fa4f791766556d81a9e124cd4b
+ size 4907602284
model-00017-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24b369278f3c28df9558b8255115a30d26cfe76fa6abd3c3e15f2dc94a88a014
+ size 4907602268
model-00018-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a20d25709381b455cabd966aa39c8f38a8466f5f29bd495fa7cff8066755ab4a
+ size 4907602252
model-00019-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a13ecec78f5c241f58e7c5f6521d5532b11baca58f9bf08d0fc8b5e0b0c41a6
+ size 4907602236
model-00020-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c939243565b3bc1edfe7d6e3a90ffe97d81aac76b0fb04c6f8d051eb64929067
+ size 4907602220
model-00021-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:469e12ac7c5b793be10db5176c00bb5fe590a250f227bcda0b88203c29feaa47
+ size 4970418388
model-00022-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b941997e14b9918d2793cdca2c684006e587071959822c4c8203ccf3b304e3ac
+ size 4995682384
model-00023-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3c4fad8d99491f373036f1229b49b01bb4c8fcc66a590915caf42a0e7234fea
+ size 4970516900
model-00024-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:225fba4dc5a9dc50c330d7c08c7f0c15c274a62b9fc71e1881348da8cc00224e
+ size 4907577508
model-00025-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ffe5c1fb55965b2e07d69ecbf8108cbf84dddaaaee0963e3bea9e270c53f89c
+ size 4907602572
model-00026-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94994d0db23b2320cb372cc81e613f06c84dd183def9feb9fbc28a82e1771f59
+ size 4907602556
model-00027-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a2d7009697ed53a1dd8a99b41a91f5b37833f90733198389c7acf8849f6821e
+ size 4907602540
model-00028-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba5735d07e09f51ce0d7882574371e383ad6e74e381e6d74c328ab1690867f83
+ size 4907602524
model-00029-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:205856a91cbd5d411acc49b698a586b7ee19bacb70e9b90e846ba2386f46167b
+ size 3410142452
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
recipe.yaml ADDED
@@ -0,0 +1,6 @@
+ DEFAULT_stage:
+   DEFAULT_modifiers:
+     QuantizationModifier:
+       ignore: [lm_head, 're:.*block_sparse_moe.gate']
+       targets: [Linear]
+       scheme: FP8
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60c3fc985cbfedcb429d05994efe548bdfecd6a00226fcdc8380c36fd894a3be
+ size 3671968
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37f00374dea48658ee8f5d0f21895b9bc55cb0103939607c8185bfd1c6ca1f89
+ size 587404
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff