This is Qwen/Qwen3-1.7B quantized to 8-bit with AutoRound (symmetric quantization) and serialized in the GPTQ format.
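
Below is a minimal sketch of how such a checkpoint can be produced with AutoRound's Python API. The group size, calibration settings, and output directory name are illustrative assumptions; the card does not state the exact recipe used for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen3-1.7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 8-bit symmetric quantization, as described above.
# group_size=128 is an assumed (library-default) value, not stated in the card.
autoround = AutoRound(model, tokenizer, bits=8, sym=True, group_size=128)
autoround.quantize()

# Export the quantized weights in the GPTQ serialization format.
autoround.save_quantized("Qwen3-1.7B-autoround-gptq-int8", format="auto_gptq")
```

Because the weights are serialized in the GPTQ format, the result can be loaded through the standard `transformers` GPTQ path (e.g. `AutoModelForCausalLM.from_pretrained(..., device_map="auto")` with a GPTQ backend such as `gptqmodel` or `auto-gptq` installed).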

Model size: 678M params (Safetensors)
Tensor types: I32, BF16, FP16