Getting Started

This section walks you through the initial steps to set up and run your agent. It covers creating an agent configuration, registering basic tools, and finally creating and running the agent.
Start by creating a JSON file (for example, agent_config.json) to define your agent's properties, such as its name, backstory, task, tools to use, and memory settings.
{
  "agent_name": "MathAssistant",
  "config": {
    "backstory": "I am a helpful assistant with math skills.",
    "task": "help users solve mathematical problems using my calculator tool when needed",
    "tools": ["calculator"],
    "memory": true,
    "prompt_template": ""
  }
}
- agent_name: The name of your agent.
- backstory: A brief description of your agent.
- task: The main function or purpose of the agent.
- tools: A list of tools (by name) that the agent can use.
- memory: Boolean value to enable or disable conversation memory.
- prompt_template (optional): Custom prompt format if you want to override the default prompt.
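Since a malformed configuration file is a common source of startup errors, it can help to sanity-check it before handing it to the framework. The sketch below uses only Python's standard json module and mirrors the field descriptions above; the specific checks are illustrative, not part of the framework's own validation.

```python
import json

# The example configuration from above, inlined as a string for illustration;
# in practice you would read it from agent_config.json with open(...).
config_text = """
{
  "agent_name": "MathAssistant",
  "config": {
    "backstory": "I am a helpful assistant with math skills.",
    "task": "help users solve mathematical problems using my calculator tool when needed",
    "tools": ["calculator"],
    "memory": true,
    "prompt_template": ""
  }
}
"""

data = json.loads(config_text)

# Check each field against the expected type from the list above.
assert isinstance(data["agent_name"], str)
inner = data["config"]
assert isinstance(inner["backstory"], str)
assert isinstance(inner["task"], str)
assert isinstance(inner["tools"], list)
assert isinstance(inner["memory"], bool)

print(data["agent_name"])  # MathAssistant
```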
Before running the agent, register the basic tools using the provided utility in tool.py. For example, you can register a simple calculator tool.
from tool import ToolManager
# Register basic tools (e.g., a calculator) and set up the agent configuration.
ToolManager.setup_basic_config()
This call will:
- Create a tool configuration file (e.g., tools/calculator.json).
- Set up the necessary directory structure and Python modules for the tool.
- Generate an agent configuration file (agent_config.json) if it doesn't already exist.
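To make the generated pieces concrete, here is a minimal sketch of what a calculator tool module might look like. This is an assumption for illustration only: the actual module that ToolManager.setup_basic_config() generates may differ, and the function name calculator is hypothetical. The sketch evaluates arithmetic via the ast module rather than raw eval, which is a common way to keep expression evaluation restricted.

```python
import ast
import operator

# Operators the sketch supports; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def _eval(node):
    # Numeric literal: return it directly.
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    # Binary operation: recurse into both sides.
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    # Unary operation, e.g. negation.
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand))
    raise ValueError("unsupported expression")

def calculator(expression: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    return _eval(ast.parse(expression, mode="eval").body)

print(calculator("2 + 2"))  # 4
```

Restricting evaluation to a whitelist of AST node types is what keeps a tool like this safe to expose to LLM-generated input.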
With the configuration and tools in place, you can now create and run the agent. Use the helper functions provided in agent.py.
from agent import create_agent, run_agent
# Create an agent by providing the path to your configuration file
agent = create_agent("agent_config.json", llm_provider="openai")
# Run the agent with a query and print the response
response = run_agent(agent, "What is 2 + 2?")
print("Agent Response:", response)
What Happens Under the Hood:
- Configuration Loading: The agent reads the configuration from agent_config.json and sets up properties like the agent name, tools, and memory settings.
- Tool Loading: It loads tool configurations from the tools/ directory. For example, the calculator tool is loaded and becomes available for use.
- Memory Initialization: A memory instance is created (basic or semantic, based on further customization) to store conversation history.
- LLM Provider Integration: The agent uses the specified LLM provider (e.g., OpenAI) to process queries. It formats the message prompt with system instructions and conversation context.
- Query Processing: When a query is processed:
- The agent constructs a complete prompt including system instructions and past conversation (if memory is enabled).
- The query is sent to the LLM.
- If the LLM output includes a tool call (in JSON format), the specified tool is executed (e.g., the calculator evaluates the expression).
- The tool's result is fed back into the conversation, and the final response is generated.
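The tool-call step in the flow above can be sketched as a small dispatch loop. The names here (detect_tool_call, handle_llm_output, TOOLS) are hypothetical and not part of agent.py; the sketch only illustrates the pattern of parsing a JSON tool call from LLM output, executing the named tool, and feeding the result back.

```python
import json

# Hypothetical tool registry; a real framework would load these from tools/.
# eval() with stripped builtins is for demo brevity only, not production use.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def detect_tool_call(llm_output: str):
    """Return (tool_name, argument) if the output is a JSON tool call, else None."""
    try:
        payload = json.loads(llm_output)
        return payload["tool"], payload["input"]
    except (ValueError, KeyError, TypeError):
        return None  # plain-text answer, no tool call

def handle_llm_output(llm_output: str) -> str:
    call = detect_tool_call(llm_output)
    if call is None:
        return llm_output                    # final answer, return as-is
    name, arg = call
    result = TOOLS[name](arg)                # execute the named tool
    # In the real flow this result would be fed back into the conversation
    # so the LLM can phrase the final response.
    return f"Tool {name} returned: {result}"

print(handle_llm_output('{"tool": "calculator", "input": "2 + 2"}'))
```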
Summary
By following these steps, you set up the basic environment for your agent:
- Create a configuration file that defines your agent's behavior.
- Register the required tools.
- Create and run the agent to process queries.
This modular approach lets you expand the framework further with advanced features, such as semantic memory, RAG, and multimodal support.