Distilling LLM Agent into Small Models with Retrieval and Code Tools
Abstract
Agent Distillation transfers not only reasoning but full task-solving behavior from large language models to small models equipped with retrieval and code tools, using first-thought prefix prompting and self-consistent action generation, and performs competitively with next-tier larger CoT-distilled models on a range of reasoning tasks.
Large language models (LLMs) excel at complex reasoning tasks but remain computationally expensive, limiting their practical deployment. To address this, recent works have focused on distilling reasoning capabilities into smaller language models (sLMs) using chain-of-thought (CoT) traces from teacher LLMs. However, this approach struggles in scenarios requiring rare factual knowledge or precise computation, where sLMs often hallucinate due to their limited capability. In this work, we propose Agent Distillation, a framework for transferring not only reasoning capability but also full task-solving behavior from LLM-based agents into sLMs equipped with retrieval and code tools. We improve agent distillation along two complementary axes: (1) we introduce a prompting method called first-thought prefix to enhance the quality of teacher-generated trajectories; and (2) we propose self-consistent action generation to improve the test-time robustness of small agents. We evaluate our method on eight reasoning tasks across factual and mathematical domains, covering both in-domain and out-of-domain generalization. Our results show that sLMs as small as 0.5B, 1.5B, and 3B parameters can achieve performance competitive with next-tier larger 1.5B, 3B, and 7B models fine-tuned using CoT distillation, demonstrating the potential of agent distillation for building practical, tool-using small agents. Our code is available at https://github.com/Nardien/agent-distillation.
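The abstract describes self-consistent action generation only at a high level; the linked repository contains the authors' actual implementation. As a rough, hypothetical illustration of the idea (not the paper's code), a small agent could sample several candidate code actions, execute each one, and keep an action whose observation agrees with the majority. All names below (`agent.sample_action`, `run_in_sandbox`) are assumptions made for this sketch.

```python
from collections import Counter
import contextlib
import io


def run_in_sandbox(code: str) -> str:
    """Crude stand-in for a code-execution tool: run the snippet and capture
    its stdout. A real agent would use a proper sandbox, not bare exec()."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # illustrative only; unsafe for untrusted code
    return buffer.getvalue().strip()


def self_consistent_action(agent, state, k=5, temperature=0.7):
    """Sketch of self-consistent action generation (assumed interface).

    Samples k candidate code actions from the small agent, executes each,
    and returns an action whose observation matches the majority result.
    """
    candidates = [agent.sample_action(state, temperature=temperature) for _ in range(k)]

    results = []
    for action in candidates:
        try:
            observation = run_in_sandbox(action)  # run the generated code action
        except Exception:
            continue  # discard candidates that fail to execute
        results.append((action, observation))

    if not results:
        # Fall back to a greedy action if no sampled candidate executes cleanly.
        return agent.sample_action(state, temperature=0.0)

    # Majority vote over observations; return an action that produced the modal result.
    majority_obs, _ = Counter(obs for _, obs in results).most_common(1)[0]
    return next(action for action, obs in results if obs == majority_obs)
```

Discarding candidates that fail to execute and voting over their observations is one plausible way to trade extra samples for test-time robustness, which is the stated goal of the technique.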
Community
Audio overview 😀
Ep 83: Distilling LLM Agent into Small Models with Retrieval and Code Tools
https://youtu.be/D6WYkoSYYUY
I have created a fork that uses the Cloudflare Workers AI API to generate synthetic data, using my $5,000 in credits.
The data is also available.
https://github.com/ThomasVuNguyen/agent-distillation
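For readers who want to reproduce this kind of synthetic-data generation, a minimal sketch of a Cloudflare Workers AI text-generation call is shown below. The model slug, environment variables, and prompt handling are assumptions; the fork above may structure its pipeline differently.

```python
import os

import requests

# Placeholders: set these to your own Cloudflare account ID and API token.
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]
MODEL = "@cf/meta/llama-3.1-8b-instruct"  # example model slug, swap as needed


def generate(prompt: str) -> str:
    """Call the Workers AI run endpoint and return the generated text."""
    url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["result"]["response"]
```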
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Agentic Reasoning and Tool Integration for LLMs via Reinforcement Learning (2025)
- Search and Refine During Think: Autonomous Retrieval-Augmented Reasoning of LLMs (2025)
- DRP: Distilled Reasoning Pruning with Skill-aware Step Decomposition for Efficient Large Reasoning Models (2025)
- Scaling Reasoning can Improve Factuality in Large Language Models (2025)
- Hawkeye: Efficient Reasoning with Model Collaboration (2025)
- Tool-Star: Empowering LLM-Brained Multi-Tool Reasoner via Reinforcement Learning (2025)
- Sparks of Tabular Reasoning via Text2SQL Reinforcement Learning (2025)
Models citing this paper: 1
Datasets citing this paper: 0
Spaces citing this paper: 0