Xiaolong-Qwen3-1.7B

Xiaolong is a small (1.72B parameters, BF16), uncensored, reasoning-focused model finetuned using ORPO and QLoRA on top of Qwen3-1.7B-abliterated-TIES.
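
A minimal inference sketch with Hugging Face transformers is shown below. The `enable_thinking` switch follows the standard Qwen3 chat template; the prompt and generation settings are illustrative assumptions, not part of this card.

```python
# Minimal generation sketch (standard transformers API + Qwen3 chat
# template; this is not the author's own code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Xiaolong-Qwen3-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many prime numbers are there below 20?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,  # Qwen3-style reasoning trace; assumed to carry over to this finetune
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```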

Finetuning Details

  • Method: ORPO (see the training sketch after this list)
  • Epochs: 2
  • Learning Rate: 5e-6, cosine decay w/ 5% warmup
  • Batch Size: 2 x 16 (32 effective)
  • Max Grad Norm: 0.3
  • LoRA Rank: 64
  • Hardware: 1x NVIDIA RTX A6000
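
The settings above map onto TRL's ORPOTrainer roughly as follows. This is a sketch, not the author's script: only the hyperparameters listed above come from this card, while the dataset file, LoRA alpha/dropout, and target modules are assumptions.

```python
# Training sketch: ORPO + QLoRA via TRL/PEFT, mirroring the listed
# hyperparameters. Values not in the card are marked as assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base_id = "nbeerbower/Qwen3-1.7B-abliterated-TIES"  # base model named above; exact hub path assumed

# QLoRA: load the base model in 4-bit NF4, compute in bf16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

peft_config = LoraConfig(
    r=64,                        # LoRA rank from the card
    lora_alpha=64,               # assumption; the card only gives the rank
    lora_dropout=0.05,           # assumption
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

args = ORPOConfig(
    output_dir="Xiaolong-Qwen3-1.7B",
    num_train_epochs=2,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,               # 5% warmup
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,  # 2 x 16 = 32 effective
    max_grad_norm=0.3,
    bf16=True,
)

# Hypothetical local file of prompt/chosen/rejected pairs; the actual
# training datasets are listed on the model's hub page.
train_dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # "tokenizer=" in older TRL releases
    peft_config=peft_config,
)
trainer.train()
```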

Dataset Composition

~9,100 samples in total, roughly 3,000 of which include chain-of-thought reasoning traces.
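
ORPO trains on preference pairs; in TRL's expected format a record looks roughly like the example below. The contents are invented for illustration (the card does not publish individual samples), with the chain-of-thought variant carrying a reasoning trace in the chosen response.

```python
# Invented ORPO preference pair; not taken from the actual training data.
cot_sample = {
    "prompt": "Is 221 a prime number?",
    "chosen": (
        "<think>221 = 13 x 17, so it has divisors besides 1 and itself."
        "</think>\nNo. 221 factors as 13 x 17, so it is composite."
    ),
    "rejected": "Yes, 221 is prime.",
}
```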
