---
license: mit
language:
- en
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- llama
- quantized
- gguf
- tq2_0
pipeline_tag: text-generation
model_creator: meta
quantization: TQ2_0
---

# LlamaLite-3B-TQ2_0 (GGUF Format)

This is a **quantized** version of `meta-llama/Llama-3.2-3B-Instruct`, using **TQ2_0** quantization for optimized performance and reduced size. The model is stored in **GGUF format** for compatibility with `llama.cpp` and other lightweight inference engines.

## Model Details

- **Base Model:** [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Quantization Type:** `TQ2_0`
- **Model Size:** ~1.52 GB
- **Format:** GGUF
- **Intended Use:** Text Generation, Chatbots, AI Assistants
- **License:** MIT

## Download & Usage

### 1️⃣ Install Dependencies

```bash
pip install huggingface_hub
```
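With `huggingface_hub` installed, the GGUF file can be fetched programmatically with `hf_hub_download`. This is a minimal sketch: the repository ID and filename below are placeholders (this card does not state the hosting repo or exact file name), so substitute the actual values before running.

```python
from huggingface_hub import hf_hub_download


def fetch_gguf(repo_id: str, filename: str) -> str:
    """Download a single GGUF file from the Hub and return its local cache path."""
    return hf_hub_download(repo_id=repo_id, filename=filename)


if __name__ == "__main__":
    # Placeholder identifiers -- replace with the real repo and file name.
    path = fetch_gguf(
        "your-namespace/LlamaLite-3B-TQ2_0",
        "llamalite-3b-tq2_0.gguf",
    )
    print(path)  # local path usable by llama.cpp, e.g. with `llama-cli -m <path>`
```

The downloaded file lands in the local Hugging Face cache, so repeated calls are cheap; the returned path can be passed directly to `llama.cpp`-based loaders.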