Automated benchmark (due to time constraints)

This benchmark compares the Qwen2.5-Coder-7B-Instruct base model with this ServiceNow finetune, Qwen QwQ 32B, and Quasar-Alpha (a stealth model on OpenRouter that was later revealed to be a pre-release of GPT-4.1; its coding ability is comparable to, or slightly better than, DeepSeek V3; see https://openrouter.ai/openrouter/quasar-alpha and https://openrouter.ai/openai/gpt-4.1). DeepSeek R1 was used as the judge to evaluate each model's answer to every benchmark question.
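For illustration, here is a minimal sketch of such an evaluation loop, assuming the OpenRouter chat-completions API (OpenAI-compatible). The model slugs, judging prompt, and example question are illustrative assumptions, not the exact benchmark setup; the ServiceNow finetune itself is not hosted on OpenRouter, so its answers would be generated locally and graded the same way.

```python
# Sketch of the candidate-vs-judge loop, assuming OpenRouter's
# OpenAI-compatible endpoint. Model slugs and the rubric are illustrative.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

CANDIDATES = [
    "qwen/qwen2.5-coder-7b-instruct",  # baseline
    "qwen/qwq-32b",                    # reasoning comparison
    "openrouter/quasar-alpha",         # pre-release GPT-4.1
]
JUDGE = "deepseek/deepseek-r1"

def ask(model: str, prompt: str) -> str:
    """Get one answer from a candidate model."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def grade(question: str, answer: str) -> str:
    """Have the judge model rate one answer; returns the raw verdict text."""
    rubric = (
        "You are grading an answer to a ServiceNow coding question.\n"
        f"Question:\n{question}\n\nAnswer:\n{answer}\n\n"
        "Rate correctness and usefulness from 1 to 10 and explain briefly."
    )
    resp = client.chat.completions.create(
        model=JUDGE,
        messages=[{"role": "user", "content": rubric}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical benchmark question; the real set was ServiceNow-specific.
    question = ("Write a ServiceNow business rule that prevents closing "
                "an incident without close notes.")
    for model in CANDIDATES:
        print(model, "->", grade(question, ask(model, question)))
```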

Please note: this process definitely needs some improvements, but it should be good enough for a general overview.

The results were okay but not as good as I wanted; I will definitely take another look at the training data and try different approaches.

GGUF model details
- Model size: 7.62B params
- Architecture: qwen2
- Quantization: 5-bit

Model tree for henrik3/Qwen2.5-Coder-7B-Instruct-ServiceNow-v0.1
- Base model: Qwen/Qwen2.5-7B
- This model: a quantized derivative of the base model
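Since the weights ship as a 5-bit GGUF, here is a minimal sketch of running them locally with llama-cpp-python. The model_path below is a placeholder; point it at the actual .gguf file downloaded from this repository.

```python
# Minimal local-inference sketch for the 5-bit GGUF using llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-7B-Instruct-ServiceNow-v0.1-Q5.gguf",  # placeholder filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "Write a ServiceNow client script that validates the short_description field."}
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```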