Model Card
This model is trained with cold-start reinforcement learning (cold RL) for mathematical reasoning, released as part of our MathIF project.
GitHub Repository: https://github.com/TingchenFu/MathIF
Training Details
We base our experiments on the DeepScaler dataset, which contains approximately 40k math reasoning samples. Training is conducted on 16 NVIDIA H100 GPUs. For reinforcement learning, we adopt the GRPO framework with verifiable outcome-based rewards. The model is trained with the VeRL framework, with most hyper-parameters following the default settings.
Evaluation
For decoding, we use nucleus sampling (temperature T=1.0, top-p=0.95) with a maximum generation length of 16,384 tokens, and the vLLM engine for efficient inference.
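The decoding setup above can be captured as a small configuration sketch. The keys mirror vLLM's SamplingParams arguments; this is a hedged illustration, not code shipped with the model.

```python
# Decoding configuration mirroring the evaluation setup; the keyword
# names follow vLLM's SamplingParams (temperature, top_p, max_tokens).
decoding_config = {
    "temperature": 1.0,   # nucleus sampling temperature T
    "top_p": 0.95,        # nucleus sampling threshold p
    "max_tokens": 16384,  # maximum generation length
}

# With vLLM installed, this configuration would be used roughly as:
#   from vllm import LLM, SamplingParams
#   sampling = SamplingParams(**decoding_config)
#   llm = LLM(model="TingchenFu/coldrl_3k_qwen-2.5-7b_04240151")
#   outputs = llm.generate(prompts, sampling)
print(decoding_config["max_tokens"])
```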
Citation
BibTeX:
@article{fu2025scaling,
  title={Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models},
  author={Fu, Tingchen and Gu, Jiawei and Li, Yafu and Qu, Xiaoye and Cheng, Yu},
  journal={arXiv preprint arXiv:2505.14810},
  year={2025}
}
Base Model
Qwen/Qwen2.5-7B