Alcoft/Qwen3-0.6B-GGUF
Text Generation · GGUF · conversational
License: apache-2.0
README.md exists but content is empty.
Downloads last month: 0
GGUF
Model size: 752M params
Architecture: qwen3
Available quantized files:

Bits     Quant     File size
2-bit    Q2_K      347 MB
3-bit    Q3_K_S    390 MB
3-bit    Q3_K_M    414 MB
3-bit    Q3_K_L    435 MB
4-bit    Q4_K_S    471 MB
4-bit    Q4_K_M    484 MB
5-bit    Q5_K_S    544 MB
5-bit    Q5_K_M    551 MB
6-bit    Q6_K      623 MB
8-bit    Q8_0      805 MB
16-bit   BF16      1.51 GB
16-bit   F16       1.51 GB
Inference Providers
This model isn't deployed by any Inference Provider.
Model tree for Alcoft/Qwen3-0.6B-GGUF
Base model: Qwen/Qwen3-0.6B-Base
Finetuned: Qwen/Qwen3-0.6B
Quantized (45): this model
Collection including Alcoft/Qwen3-0.6B-GGUF
TAO71-AI Quants: Qwen3 (collection, 3 items, updated about 12 hours ago)