Quantized with AutoAWQ v0.2.8 using the default settings:

    quant_config = {
        "zero_point": True,    # asymmetric quantization with zero points
        "q_group_size": 128,   # group size for the per-group weight scales
        "w_bit": 4,            # 4-bit weights
        "version": "GEMM"      # GEMM kernel variant of the AWQ linear layers
    }
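
For reference, the quantization run would follow the standard AutoAWQ workflow sketched below. This is only a sketch: the base-model path is a placeholder, and nothing beyond the quant_config above is taken from this card.

    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    # Placeholder path to the original unquantized model (not stated on this card)
    base_model_path = "path/to/DeepSeekR1-Qwen2.5-Coder-32B-Preview"
    quant_path = "DeepSeekR1-Qwen2.5-Coder-32B-Preview-AWQ"

    quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

    # Load the full-precision model and tokenizer, run AWQ calibration, then save the 4-bit weights
    model = AutoAWQForCausalLM.from_pretrained(base_model_path)
    tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
    model.quantize(tokenizer, quant_config=quant_config)
    model.save_quantized(quant_path)
    tokenizer.save_pretrained(quant_path)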
Safetensors · Model size: 5.73B params · Tensor types: I32, BF16, FP16
Inference Providers: this model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because the repo has no library tag.
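
It can still be run locally. Assuming the repo is a standard AWQ checkpoint and the autoawq package is installed, a minimal transformers loading sketch might look like the following; the prompt and generation settings are illustrative only.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "CAPsMANyo/DeepSeekR1-Qwen2.5-Coder-32B-Preview-AWQ"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # AWQ checkpoints load through the regular transformers API when autoawq is installed
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Write a Python function that reverses a string."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))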
