---
license: mit
datasets:
- Jiayi-Pan/Countdown-Tasks-3to4
language:
- en
- es
- ar
- la
base_model:
- unsloth/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- unsloth
- llama
- deepseek
- GGUF
- Q4_K_M
---
# Llama3-ThinkQ8

A fine-tuned version of Llama 3.2 that shows its explicit reasoning using `<think>` and `<answer>` tags. This model is quantized to 4-bit (Q4_K_M, GGUF) for efficient inference.
## Model Details

- **Base Model:** unsloth/Llama-3.2-3B-Instruct
- **Quantization:** 4-bit (Q4_K_M, GGUF)
- **Special Feature:** Explicit thinking process with `<think>`/`<answer>` tags
## How to Use with Ollama

### 1. Install Ollama

If you haven't already installed Ollama, follow the instructions at [ollama.ai](https://ollama.ai).

### 2. Download the model file

Download the GGUF file (`llama3-think-Q4_K_M.gguf`) from this repository.

### 3. Create the Ollama model

Create a file named `Modelfile` with this content:
```
FROM llama3-think-Q4_K_M.gguf

# Model parameters
PARAMETER temperature 0.8
PARAMETER top_p 0.9

# System prompt
SYSTEM """You are a helpful assistant. Check the user's request, think it through with brainstorming and self-reflection in your mind, and respond only in the following format:
<think> {your thoughts here} </think>
<answer> {your final answer here} </answer>
Use each tag once and place ALL of your output inside them."""
```
Then run:

```bash
ollama create llama3-think -f Modelfile
```
### 4. Run the model

```bash
ollama run llama3-think
```
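Responses arrive as a single string containing both tags. A minimal Python sketch for splitting a response into its thinking and answer parts (regex-based, assuming the model follows the format instruction; `split_response` is an illustrative helper, not part of this repository):

```python
import re

def split_response(text: str):
    """Extract the contents of the <think> and <answer> tags.

    Returns (thoughts, answer); either element is None if its tag is
    missing, which can happen if the model ignores the format instruction.
    """
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

raw = "<think> 8 * 3 = 24, and 24 - 5 = 19. </think>\n<answer> 8 * 3 - 5 = 19 </answer>"
thoughts, answer = split_response(raw)
print(answer)  # 8 * 3 - 5 = 19
```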
## Example Prompts

Try these examples:

Using each number in this list (5, 8, 3) ONLY once, and any arithmetic operation (add, subtract, multiply, divide), create an equation that equals 19.

Explain the concept of quantum entanglement to a high school student.
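For the first prompt, you can check that a solution exists before judging the model's answer. A small brute-force checker (an illustrative helper, not part of the model or this repository) that tries every ordering of the three numbers with every pair of operations and both groupings:

```python
from itertools import permutations, product

def find_equations(numbers, target):
    """Brute-force all orderings and operator pairs over three numbers,
    trying both groupings: (a op1 b) op2 c and a op1 (b op2 c)."""
    ops = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y if y != 0 else None,  # skip division by zero
    }
    solutions = set()
    for a, b, c in permutations(numbers):
        for o1, o2 in product(ops, repeat=2):
            left = ops[o1](a, b)
            if left is not None:
                r = ops[o2](left, c)
                if r is not None and abs(r - target) < 1e-9:
                    solutions.add(f"({a} {o1} {b}) {o2} {c} = {target}")
            right = ops[o2](b, c)
            if right is not None:
                r = ops[o1](a, right)
                if r is not None and abs(r - target) < 1e-9:
                    solutions.add(f"{a} {o1} ({b} {o2} {c}) = {target}")
    return sorted(solutions)

print(find_equations([5, 8, 3], 19))
```

A correct model answer such as `8 * 3 - 5 = 19` should appear among the solutions this prints.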