# meetkai/functionary-small-v3.2 GGUF Quantizations

Optimized GGUF quantization files for enhanced model performance.

Powered by Featherless AI - run any model you'd like for a simple, small fee.

## Available Quantizations
| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | meetkai-functionary-small-v3.2-IQ4_XS.gguf | 4276.63 MB |
| Q2_K | meetkai-functionary-small-v3.2-Q2_K.gguf | 3031.87 MB |
| Q3_K_L | meetkai-functionary-small-v3.2-Q3_K_L.gguf | 4121.75 MB |
| Q3_K_M | meetkai-functionary-small-v3.2-Q3_K_M.gguf | 3832.75 MB |
| Q3_K_S | meetkai-functionary-small-v3.2-Q3_K_S.gguf | 3494.75 MB |
| Q4_K_M | meetkai-functionary-small-v3.2-Q4_K_M.gguf | 4692.79 MB |
| Q4_K_S | meetkai-functionary-small-v3.2-Q4_K_S.gguf | 4475.29 MB |
| Q5_K_M | meetkai-functionary-small-v3.2-Q5_K_M.gguf | 5467.42 MB |
| Q5_K_S | meetkai-functionary-small-v3.2-Q5_K_S.gguf | 5339.92 MB |
| Q6_K | meetkai-functionary-small-v3.2-Q6_K.gguf | 6290.45 MB |
| Q8_0 | meetkai-functionary-small-v3.2-Q8_0.gguf | 8145.13 MB |
## Powered by Featherless AI

### Key Features

- Instant Hosting - Deploy any Llama model on HuggingFace instantly
- Zero Infrastructure - No server setup or maintenance required
- Vast Compatibility - Support for 2400+ models and counting
- Affordable Pricing - Starting at just $10/month

Links: Get Started | Documentation | Models
## Model tree for featherless-ai-quants/meetkai-functionary-small-v3.2-GGUF

Base model: meetkai/functionary-small-v3.2