Papers
arxiv:2505.20325

Guided by Gut: Efficient Test-Time Scaling with Reinforced Intrinsic Confidence

Published on May 23
· Submitted by AmirhoseinGH on May 28
Abstract

The Guided by Gut (GG) framework enhances LLM reasoning efficiently using intrinsic signals and token-level confidence, outperforming PRM-based methods with faster inference and lower memory usage.

AI-generated summary

Test-Time Scaling (TTS) methods for enhancing Large Language Model (LLM) reasoning often incur substantial computational costs, primarily due to extensive reliance on external Process Reward Models (PRMs) or sampling methods like Best-of-N (BoN). This paper introduces Guided by Gut (GG), an efficient self-guided TTS framework that achieves PRM-level performance without costly external verifier models. Our method employs a lightweight tree search guided solely by intrinsic LLM signals: token-level confidence and step novelty. One critical innovation is improving the reliability of internal confidence estimates via a targeted reinforcement learning fine-tuning phase. Empirical evaluations on challenging mathematical reasoning benchmarks demonstrate that GG enables smaller models (e.g., 1.5B parameters) to achieve accuracy matching or surpassing significantly larger models (e.g., 32B-70B parameters), while reducing GPU memory usage by up to 10x. Compared to PRM-based methods, GG achieves comparable accuracy with 8x faster inference speeds and 4-5x lower memory usage. Additionally, GG reduces KV cache memory usage by approximately 50% compared to the BoN strategy, facilitating more efficient and practical deployment of TTS techniques.
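The abstract's core idea, a tree search over reasoning steps scored only by intrinsic signals (token-level confidence and step novelty) rather than an external PRM, can be illustrated with a minimal sketch. This is not the paper's implementation: the scoring weight `alpha`, the mean-probability confidence measure, the token-overlap novelty proxy, and the `expand` callback (standing in for LLM step generation) are all hypothetical simplifications.

```python
# Hedged sketch of intrinsic-signal-guided tree search (not the authors' code).
# Candidate reasoning steps are scored by a mix of token-level confidence
# and step novelty, so no external verifier model is needed.

def step_confidence(token_probs):
    """Mean probability of the sampled tokens in a step; higher = more confident."""
    return sum(token_probs) / len(token_probs)

def novelty(step_tokens, history):
    """Crude novelty proxy (assumption): fraction of tokens not seen in prior steps."""
    seen = {tok for step in history for tok in step}
    if not step_tokens:
        return 0.0
    return sum(1 for tok in step_tokens if tok not in seen) / len(step_tokens)

def guided_tree_search(expand, root, beam_width=2, depth=3, alpha=0.7):
    """Beam-style tree search ranked by alpha*confidence + (1-alpha)*novelty.

    `expand(node)` stands in for the LLM: it returns candidate
    (child_node, step_tokens, token_probs) triples for the next step.
    """
    beam = [(root, [], 0.0)]  # (node, step history, score)
    for _ in range(depth):
        candidates = []
        for node, history, _ in beam:
            for child, tokens, probs in expand(node):
                score = (alpha * step_confidence(probs)
                         + (1 - alpha) * novelty(tokens, history))
                candidates.append((child, history + [tokens], score))
        beam = sorted(candidates, key=lambda c: c[2], reverse=True)[:beam_width]
    return beam[0]  # highest-scoring reasoning path
```

In the paper's framing, the reinforcement learning fine-tuning phase would make the model's token probabilities better calibrated, so a score like `step_confidence` becomes a trustworthy substitute for a PRM's judgment.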

Community

Paper author / submitter:

TL;DR: "Guided by Gut (GG)" is an efficient, PRM-free search method that boosts small LLMs (1.5B) to outperform much larger models (32B–70B). Leveraging GRPO-based reinforcement learning to calibrate internal confidence, GG enables efficient, fast, and better reasoning without costly external verifiers. 📄✨


Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 1