---
base_model:
- deepseek-ai/DeepSeek-Prover-V1.5-SFT
datasets:
- kfdong/STP_Lean_SFT
license: mit
library_name: transformers
pipeline_tag: text-generation
---

This is the final Self-play Theorem Prover (STP) model described in the paper [Beyond Limited Data: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving](https://arxiv.org/abs/2502.00212). The training and evaluation code is available [here](https://github.com/kfdong/STP/tree/main).

```tex
@article{dong2025beyond,
  title={Beyond Limited Data: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving},
  author={Dong, Kefan and Ma, Tengyu},
  journal={arXiv preprint arXiv:2502.00212},
  year={2025}
}
```

## 1. Evaluation Results

The table below compares the pass@3200 performance of STP (our model) and the DeepSeek-Prover-V1.5 baselines on miniF2F-test and ProofNet-test.
| Model                        | miniF2F-test     | ProofNet-test    |
|------------------------------|------------------|------------------|
| **DeepSeek-Prover-V1.5-SFT** | 53.3% ± 0.5%     | 21.0% ± 0.9%     |
| **DeepSeek-Prover-V1.5-RL**  | 54.9% ± 0.7%     | 22.0% ± 0.5%     |
| **STP**                      | **65.0% ± 0.5%** | **23.9% ± 0.6%** |
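For reference, pass@3200 is the probability that at least one of 3200 sampled proofs for a statement is accepted by the Lean verifier. The snippet below is a minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021), which is the usual way such numbers are computed from n samples per statement; the exact evaluation protocol used in the paper is in the linked code.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total number of proofs sampled for a statement
    c: number of those proofs accepted by the Lean verifier
    k: evaluation budget (e.g. 3200)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct proof
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```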
## 2. Dataset

We also release the dataset [here](https://huggingface.co/datasets/kfdong/STP_Lean_0320), which contains:
- Extracted examples from mathlib4,
- Generated correct proofs of statements in LeanWorkbook,
- Generated correct proofs of conjectures proposed by our model during self-play training.

Our final model is finetuned from DeepSeek-Prover-V1.5-SFT on this dataset for 1 epoch.
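As a quick-start sketch, the code below loads the released dataset with the `datasets` library and runs the model on a toy Lean statement with `transformers`. The split name and the model ID are assumptions (replace `model_id` with this repository's actual ID), and the prompt format should follow DeepSeek-Prover-V1.5, from which this model is finetuned; see the dataset card and the training code linked above for the exact schema and prompting.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Dataset ID as linked above; the split name is an assumption —
# check the dataset card for the actual configuration and schema.
ds = load_dataset("kfdong/STP_Lean_0320", split="train")
print(ds[0])  # one training example

# Assumption: placeholder model ID — replace with this repository's ID.
model_id = "kfdong/STP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Toy Lean 4 statement; the real prompt format should match the
# DeepSeek-Prover-V1.5 convention used in the linked evaluation code.
prompt = "theorem add_comm' (a b : ℕ) : a + b = b + a := by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=1.0)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```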