tags:
- safetensors
- onnx
- transformers.js
model_name: SmolLM2 135M Instruct
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
inference: false
model_creator: HuggingFaceTB
pipeline_tag: text-generation
quantized_by: fbaldassarri
---

## Model Information

Quantized version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 128
- asymmetric quantization
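
In plain terms: every contiguous group of 128 weights gets its own scale, and asymmetric quantization adds a per-group zero point so the INT8 grid can shift to fit groups whose values are not centered on zero. A minimal NumPy sketch of the idea (illustrative only, not AutoRound's implementation; the helper name is made up):

```python
import numpy as np

def quantize_group_asym_int8(w):
    """Asymmetric 8-bit quantization of one weight group (illustrative helper)."""
    qmin, qmax = 0, 255                          # unsigned 8-bit grid
    scale = (w.max() - w.min()) / (qmax - qmin)  # one scale per group
    zero_point = int(round(-w.min() / scale))    # the shift that makes it asymmetric
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

row = np.random.randn(512).astype(np.float32)    # one weight row
for group in row.reshape(-1, 128):               # group size = 128
    q, scale, zp = quantize_group_asym_int8(group)
    w_hat = scale * (q.astype(np.float32) - zp)  # dequantize: w ≈ scale * (q - zp)
```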

Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.5

Note: this INT8 version of SmolLM2-135M-Instruct has been quantized to run inference on CPU.
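
Since the checkpoint is exported in AutoGPTQ format, CPU inference can go through the usual transformers loading path. A rough sketch (assuming the quantized checkpoint has been saved or downloaded to the recipe's output_dir below, and that a GPTQ-capable backend such as optimum + auto-gptq is installed; whether INT8 GPTQ kernels are available on your CPU depends on that backend):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Local path of the exported checkpoint (the recipe's output_dir; adjust as needed)
quantized_dir = "./AutoRound/HuggingFaceTB_SmolLM2-135M-Instruct-auto_gptq-int8-gs128-asym"

model = AutoModelForCausalLM.from_pretrained(quantized_dir)
tokenizer = AutoTokenizer.from_pretrained(quantized_dir)

prompt = "Explain weight quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```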

## Replication Recipe

Install AutoRound from a local checkout of the repository (the `[cpu]` extra targets CPU-only tuning):

```bash
pip install -vvv --no-build-isolation -e .[cpu]
```

Then tune and export the quantized model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the full-precision base model and tokenizer
model_name = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT8 weights, group size 128, asymmetric (sym=False), tuned on CPU with AMP off
bits, group_size, sym, device, amp = 8, 128, False, 'cpu', False
autoround = AutoRound(
    model, tokenizer,
    nsamples=128, iters=200, seqlen=512, batch_size=4,
    bits=bits, group_size=group_size, sym=sym, device=device, amp=amp,
)
autoround.quantize()

# Export the quantized model in AutoGPTQ format
output_dir = "./AutoRound/HuggingFaceTB_SmolLM2-135M-Instruct-auto_gptq-int8-gs128-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
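
The arguments mirror the settings above: `bits=8` and `group_size=128` give INT8 weights in 128-weight groups, `sym=False` selects asymmetric quantization, and `amp=False` keeps the tuning in torch.float32.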