
Tureis-Qwen3_QWQ-4B-Exp

Tureis-Qwen3_QWQ-4B-Exp is a fine-tuned variant of the Qwen3-4B architecture, trained on QWQ synthetic datasets to strengthen precise mathematical and logical reasoning. This experimental model delivers high accuracy on structured reasoning tasks while remaining lightweight, making it well suited to technical, educational, and symbolic-computation applications.

GGUF: https://huggingface.co/prithivMLmods/Tureis-Qwen3_QWQ-4B-Exp-Q4_K_S-GGUF
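
For CPU-only or low-VRAM inference, the Q4_K_S GGUF build linked above can be loaded through llama.cpp bindings. The following is a minimal sketch using llama-cpp-python; the "*q4_k_s.gguf" filename glob and the context size are assumptions, not taken from the repo:

# pip install llama-cpp-python
from llama_cpp import Llama

# Fetches a matching .gguf file from the Hub; the glob is an assumption
# about how the quantized file is named in the linked repo.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Tureis-Qwen3_QWQ-4B-Exp-Q4_K_S-GGUF",
    filename="*q4_k_s.gguf",
    n_ctx=4096,  # assumed context size; raise it if the model supports more
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "If 5(x - 2) = 3x + 4, solve for x step-by-step."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])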

Key Features

  1. Precision Reasoning with QWQ Dataset: Tailored for high-fidelity symbolic reasoning, step-by-step math problem solving, and logic tasks, thanks to specialized QWQ synthetic fine-tuning (see the worked example after this list).

  2. Lightweight Code Understanding: Capable of interpreting, generating, and correcting code in Python, C++, and other languages, optimized for concise, logic-based tasks.

  3. Structured Output Formatting: Generates well-organized responses in Markdown, JSON, LaTeX, and tabular formats suitable for notebooks, documentation, and data-centric workflows (a structured-output sketch follows the Quickstart below).

  4. Instruction-Following Accuracy: Tuned to follow multi-step user instructions consistently across tasks and sessions, improving reliability in educational and factual domains.

  5. Multilingual Capabilities: Supports reasoning and generation in more than 20 languages for global accessibility and technical-translation use cases.

  6. Efficient 4B Architecture: Built on Qwen3-4B, offering a strong balance between performance and compute requirements, suitable for mid-tier GPUs and scaled inference scenarios.
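
As a concrete instance of the step-by-step solving targeted by feature 1 (and the same equation used in the Quickstart prompt below), the expected derivation chain is:

    5(x - 2) = 3x + 4
    5x - 10 = 3x + 4
    5x - 3x = 4 + 10
    2x = 14
    x = 7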

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Tureis-Qwen3_QWQ-4B-Exp"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "If 5(x - 2) = 3x + 4, solve for x step-by-step."

messages = [
    {"role": "system", "content": "You are a precise reasoning assistant trained on QWQ datasets."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model answers as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Drop the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
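
The same objects can also be pushed toward machine-readable output (Key Features, item 3). The following is a minimal sketch reusing the model and tokenizer from the Quickstart; the prompt wording and schema are illustrative, and enable_thinking assumes this fine-tune keeps the stock Qwen3 chat template:

import json

json_prompt = (
    "Solve 5(x - 2) = 3x + 4 and reply with only a JSON object "
    'of the form {"steps": [...], "answer": <number>}.'
)
messages = [{"role": "user", "content": json_prompt}]

# enable_thinking=False is a Qwen3 chat-template option that suppresses the
# <think> block; assumed to still apply to this fine-tune.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=256)
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
raw = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Small models can drift from a requested schema, so guard the parse.
try:
    result = json.loads(raw)
    print(result["answer"])
except (json.JSONDecodeError, KeyError):
    print("Output was not valid JSON:\n", raw)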

Intended Use

  • Step-by-step math and logic problem solving
  • Code snippet generation and explanation
  • Technical and structured documentation
  • JSON/Markdown/tabular output generation
  • Educational tools and auto-tutoring in STEM
  • Multilingual reasoning and Q&A systems

Limitations

  • Limited creativity for fiction or open-domain chat
  • Small context window compared to larger models
  • Sensitive to formatting in complex queries
  • May still produce errors in adversarial reasoning prompts
