# Model Information
This model is the reasoning model for the Text2SQL task introduced in *Think2SQL: Reinforce LLM Reasoning Capabilities for Text2SQL* ([arXiv:2504.15077](https://arxiv.org/abs/2504.15077)).
It is a fine-tuned version of `Qwen/Qwen2.5-Coder-7B-Instruct` on the `simone-papicchio/bird` dataset and has been trained using TRL.
## Quick start
The best model performance is obtained with the system and user prompts shown below. The model is intended to be used with three inputs: the question, the evidence, and the database schema.
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your installation via `pip install --upgrade transformers`.
```python
import transformers
import torch

model_id = "simone-papicchio/Think2SQL-7B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

system_message = (
    "You are a helpful AI Assistant that provides well-reasoned and detailed responses. "
    "You first think about the reasoning process as an internal monologue and then provide the user with the answer. "
    "Respond in the following format: <think>\n...\n</think>\n<answer>\n...\n</answer>"
).strip()

user_message = (
    "Answer the following question with the SQL code. Use the piece of evidence and base your answer on the database schema. "
    "Given the question, the evidence and the database schema, return in the <answer> tags only the SQL script that addresses the question.\n"
    "Question:\n{question}\n\n"
    "Evidence:\n{evidence}\n\n"
    "Database Schema:\n{schema}\n\n"
    "Return only the SQL script enclosed in <answer> tags."
).strip()

# Fill the prompt template with your own question, evidence, and database schema.
user_prompt = user_message.format(
    question="<your natural-language question>",
    evidence="<the evidence associated with the question>",
    schema="<your database schema>",
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_prompt},
]

outputs = pipeline(
    messages,
    max_new_tokens=30_000,
    temperature=0.7,
    top_p=0.95,
)
print(outputs[0]["generated_text"][-1])
```
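Since the model wraps its reasoning in `<think>` tags and the final SQL in `<answer>` tags, you will usually want to post-process the generated text. Below is a minimal sketch of such post-processing; the `extract_sql` helper and its regular expression are illustrative, not part of the model's API.

```python
import re

def extract_sql(generated_text: str) -> str:
    """Return the SQL script found inside the <answer> tags, if any."""
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", generated_text, re.DOTALL)
    return match.group(1) if match else generated_text

# The last message in generated_text is the assistant reply produced above.
assistant_reply = outputs[0]["generated_text"][-1]["content"]
sql_query = extract_sql(assistant_reply)
print(sql_query)
```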
## Training procedure
This model was trained with GRPO, a method introduced in *DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models*.
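For reference, GRPO fine-tuning with TRL follows the pattern sketched below. This is a minimal sketch assuming TRL's `GRPOTrainer` and `GRPOConfig` API; the reward function, dataset handling, and hyperparameters are illustrative and not the exact recipe used to train this model (the actual reward design is described in the Think2SQL paper).

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative reward only: score completions that follow the <think>/<answer>
# format (assumes plain-text completions). The rewards used for Think2SQL are
# described in the paper.
def format_reward(completions, **kwargs):
    return [1.0 if "<answer>" in c and "</answer>" in c else 0.0 for c in completions]

dataset = load_dataset("simone-papicchio/bird", split="train")

training_args = GRPOConfig(output_dir="Think2SQL-7B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",
    reward_funcs=format_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```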
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
```bibtex
@misc{papicchio2025think2sqlreinforcellmreasoning,
  title={Think2SQL: Reinforce LLM Reasoning Capabilities for Text2SQL},
  author={Simone Papicchio and Simone Rossi and Luca Cagliero and Paolo Papotti},
  year={2025},
  eprint={2504.15077},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2504.15077},
}

@inproceedings{papicchio2023qatch,
  title={QATCH: benchmarking SQL-centric tasks with table representation learning models on your data},
  author={Papicchio, Simone and Papotti, Paolo and Cagliero, Luca},
  booktitle={Proceedings of the 37th International Conference on Neural Information Processing Systems},
  pages={30898--30917},
  year={2023}
}
```