Implementing a Structured Planning AI Agent with LlamaIndex
Set up the environment
- Skip this step if you have already set up the environment
python -m venv .venv
source .venv/bin/activate

(On Windows, activate with .venv\Scripts\activate instead.)
Set up LlamaIndex
pip install llama-index
Create a Python file
touch worker.py
Or, on Windows (cmd):
echo. > worker.py
Open the file in VSCode
code worker.py
Add the needed imports
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import (
    StructuredPlannerAgent,
    FunctionCallingAgentWorker,
)
Define the function
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer"""
    return a * b
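Before wiring the function into an agent, it is worth sanity-checking it as plain Python; when it is later wrapped with FunctionTool.from_defaults, the function name, type hints, and docstring are what describe the tool to the LLM, so they should be accurate. A quick standalone check:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer"""
    return a * b

# The tool function is an ordinary callable and can be tested directly
assert multiply(3, 4) == 12
assert multiply(-2, 5) == -10
```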
Define and configure the worker agent
multiply_tool = FunctionTool.from_defaults(fn=multiply)
llm = OpenAI(model="gpt-4o-mini")
worker = FunctionCallingAgentWorker.from_tools([multiply_tool], llm=llm, verbose=True)
worker_agent = StructuredPlannerAgent(worker, [multiply_tool], verbose=True)
Test the worker agent
response = worker_agent.chat("Solve the equation x = 123 * (x + 2y + 3)")
print(response)
Create a .env file and add your API key
OPENAI_API_KEY="<your_api_key>"
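Note that LlamaIndex's OpenAI class reads OPENAI_API_KEY from the process environment; a .env file is not loaded automatically. The usual approach is to install python-dotenv and call load_dotenv() at the top of worker.py. For illustration only, a minimal stdlib-only loader (not the python-dotenv library, just a sketch of what it does) could look like:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Parse KEY="value" lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Call before constructing the OpenAI LLM, if a .env file exists
if os.path.exists(".env"):
    load_env_file()
```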
Run the agent
python worker.py