# Xiaolong-Qwen3-1.7B
Xiaolong is a small, uncensored, reasoning-focused model finetuned using ORPO and QLoRA on top of Qwen3-1.7B-abliterated-TIES.
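Since this is a Qwen3-based reasoning model, it can be run with the standard `transformers` chat workflow. A minimal sketch; the repository id below is a placeholder, and the `enable_thinking` flag is the Qwen3 chat-template option for toggling `<think>` traces:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nbeerbower/Xiaolong-Qwen3-1.7B"  # placeholder repo id, substitute the actual one

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "A train leaves at 3:40 pm and arrives at 5:05 pm. How long is the trip?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Qwen3 template flag: emit a <think> reasoning block before the answer
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```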
## Finetuning Details
- Method: ORPO
- Epochs: 2
- Learning Rate: 5e-6, cosine decay w/ 5% warmup
- Batch Size: 2 per device x 16 gradient-accumulation steps (32 effective)
- Max Grad Norm: 0.3
- LoRA Rank: 64
- Hardware: 1x NVIDIA RTX A6000
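The run can be approximated with TRL's `ORPOTrainer` plus PEFT and bitsandbytes for QLoRA. A minimal sketch using the hyperparameters listed above; the base-model path, LoRA alpha/dropout, target modules, and dataset choice are illustrative assumptions, not values taken from the actual run:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "path/to/Qwen3-1.7B-abliterated-TIES"  # placeholder; substitute the actual base repo

# QLoRA: load the frozen base model in 4-bit NF4
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA rank 64 as listed; alpha, dropout, and target modules are assumptions
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Hyperparameters from the list above: 2 epochs, 5e-6 cosine LR with 5% warmup,
# 2 x 16 = 32 effective batch size, max grad norm 0.3
args = ORPOConfig(
    output_dir="xiaolong-orpo",
    num_train_epochs=2,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    max_grad_norm=0.3,
    bf16=True,
)

# One of the preference datasets from the mixture below, as a stand-in
train_dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # use tokenizer= on older TRL releases
    peft_config=peft_config,
)
trainer.train()
```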
## Dataset Composition
~9,100 samples in total; 3,000 of them include Chain of Thought reasoning.
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO (1000 samples)
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
### Chain of Thought
- GeneralReasoning/GeneralThought-430K (1000 samples)
- nvidia/OpenMathReasoning (1000 samples)
- nvidia/OpenCodeReasoning (1000 samples)
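A mixture like the one above can be assembled with the `datasets` library. A rough sketch, assuming each source exposes (or is mapped to) the `prompt`/`chosen`/`rejected` columns that ORPO expects; split names, column layouts, and the shuffle seed are assumptions, and the reasoning sets in particular need to be converted into preference pairs before concatenation:

```python
from datasets import load_dataset, concatenate_datasets

full_size = [
    "nbeerbower/GreatFirewall-DPO",
    "jondurbin/truthy-dpo-v0.1",
    # ... the remaining full-size sources listed above
]
subsampled = [
    "Atsunori/HelpSteer2-DPO",
    "GeneralReasoning/GeneralThought-430K",
    "nvidia/OpenMathReasoning",
    "nvidia/OpenCodeReasoning",
]

parts = []
for name in full_size:
    parts.append(load_dataset(name, split="train"))
for name in subsampled:
    ds = load_dataset(name, split="train").shuffle(seed=42)
    parts.append(ds.select(range(1000)))  # 1,000 samples each, per the list above

# Keep only the preference columns ORPO needs; real sources need per-dataset
# column mapping (the CoT sets must first be turned into chosen/rejected pairs).
mixed = concatenate_datasets(
    [p.select_columns(["prompt", "chosen", "rejected"]) for p in parts]
).shuffle(seed=42)
print(len(mixed))
```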