Qwen3-4B-AWQ

zero_point: true
bits: 4
version: GEMM
dataset: wikitext + Orion-zhen/gsm8k-r1-qwen-32b
num_examples: 256
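
The settings above correspond to an AutoAWQ-style quantization config. Below is a minimal sketch of how such a checkpoint could be produced with the AutoAWQ library; the group size of 128 and the calibration split/column handling are assumptions not stated in this card.

```python
from awq import AutoAWQForCausalLM
from datasets import load_dataset
from transformers import AutoTokenizer

model_path = "Qwen/Qwen3-4B"   # source model
quant_path = "Qwen3-4B-AWQ"    # output directory

# Config mirroring the card: 4-bit weights, zero-point, GEMM kernels.
# q_group_size=128 is the AutoAWQ default and an assumption here.
quant_config = {
    "zero_point": True,
    "w_bit": 4,
    "version": "GEMM",
    "q_group_size": 128,
}

# Build a mixed calibration set of 256 texts from wikitext and
# Orion-zhen/gsm8k-r1-qwen-32b; splits and column handling are assumptions.
wiki = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
gsm = load_dataset("Orion-zhen/gsm8k-r1-qwen-32b", split="train")
calib_texts = [t for t in wiki["text"] if t.strip()][:128]
calib_texts += [str(row) for row in gsm.select(range(128))]  # crude stand-in for the dataset's text fields

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate, quantize, and save the AWQ checkpoint.
model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_texts)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```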
Safetensors model size: 875M params
Tensor types: I32 · BF16 · FP16

Model tree for Orion-zhen/Qwen3-4B-AWQ

Base model: Qwen/Qwen3-4B-Base
Finetuned: Qwen/Qwen3-4B
Quantized: this model (one of 45 quantizations of Qwen/Qwen3-4B)
