arXiv:2505.14146

s3: You Don't Need That Much Data to Train a Search Agent via RL

Published on May 20
· Submitted by pat-jj on May 26
AI-generated summary

A lightweight, model-agnostic framework decouples the retrieval and generation processes in RAG systems, enhancing performance with minimal training data.

Abstract

Retrieval-augmented generation (RAG) systems empower large language models (LLMs) to access external knowledge during inference. Recent advances have enabled LLMs to act as search agents via reinforcement learning (RL), improving information acquisition through multi-turn interactions with retrieval engines. However, existing approaches either optimize retrieval using search-only metrics (e.g., NDCG) that ignore downstream utility, or fine-tune the entire LLM to jointly reason and retrieve, entangling retrieval with generation and limiting both real search utility and compatibility with frozen or proprietary models. In this work, we propose s3, a lightweight, model-agnostic framework that decouples the searcher from the generator and trains the searcher using a Gain Beyond RAG reward: the improvement in generation accuracy over naive RAG. s3 requires only 2.4k training samples to outperform baselines trained on over 70x more data, consistently delivering stronger downstream performance across six general QA and five medical QA benchmarks.
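For concreteness, here is a minimal sketch of how the Gain Beyond RAG reward could be computed, assuming a frozen generator and an accuracy metric such as exact match or token-level F1. The helper names (generate, accuracy) and the overall interface are illustrative assumptions, not the paper's released code.

from typing import Callable, List

def gain_beyond_rag_reward(
    question: str,
    gold_answer: str,
    searcher_docs: List[str],    # context assembled by the trained searcher
    naive_rag_docs: List[str],   # context from a single naive retrieval call
    generate: Callable[[str, List[str]], str],  # frozen generator LLM (assumed helper)
    accuracy: Callable[[str, str], float],      # e.g. exact match or token-level F1 (assumed helper)
) -> float:
    # Reward the searcher only for the accuracy it adds beyond naive RAG:
    # GBR = Acc(generator | searcher context) - Acc(generator | naive RAG context)
    answer_with_searcher = generate(question, searcher_docs)
    answer_with_naive_rag = generate(question, naive_rag_docs)
    return (accuracy(answer_with_searcher, gold_answer)
            - accuracy(answer_with_naive_rag, gold_answer))

Because the generator stays frozen and only a scalar reward flows back to the searcher, a reward of this shape keeps the framework compatible with frozen or proprietary generation models, as the abstract notes.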

Community

pat-jj (paper author and submitter):

You Don't Need That Much Data to Train a Search Agent!
A searcher-centric training framework is all you need.

Overall performance: [figure: performance_overview.png]

