Core Components

This section details the primary building blocks of the framework, describing their responsibilities and how they interact. The core components include memory management, LLM provider integration, and tool management, all of which are orchestrated by the central Agent.

1. Memory Management

Memory management is crucial for maintaining conversation context. The framework provides two levels of memory:

Basic Memory

Location: memory.py

Purpose: Stores a limited number of recent messages (user, assistant, or system) for context during interactions.

Key Features:

  • Message Storage: Keeps messages as dictionaries with role and content.
  • Size Limitation: Retains only the most recent messages, up to a configurable maximum (default is 10); the trimming behaviour is sketched after the example below.
  • Toggle Capability: Memory can be enabled or disabled as needed.

Example Usage:

from memory import Memory

# Create a memory instance with a max of 10 messages, enabled
mem = Memory(max_items=10, enabled=True)
mem.add("user", "Hello, how are you?")
messages = mem.get_messages()
print(messages)
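
Because older entries are dropped once the maximum is reached, the context stays bounded. The following is a minimal sketch of that trimming behaviour, assuming get_messages returns the already-trimmed list:

from memory import Memory

# With max_items=3, only the three most recent messages should remain
mem = Memory(max_items=3, enabled=True)
for i in range(5):
    mem.add("user", f"message {i}")

# Expected: at most 3 messages, the oldest entries having been discarded
print(len(mem.get_messages()))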

Semantic Memory

Location: semantic_memory.py

Purpose: Enhances basic memory by using embeddings to index and retrieve semantically similar messages. It leverages the open-source SentenceTransformers and FAISS libraries to enable similarity search.

Key Features:

  • Embedding Generation: Computes a vector representation for each message.
  • Similarity Search: Uses FAISS to quickly retrieve messages similar to a given query (the approach is sketched after the example below).
  • Dynamic Index Update: Maintains a FAISS index that is rebuilt when the message history exceeds the defined limit.

Example Usage:

from semantic_memory import SemanticMemory

# Create a semantic memory instance
sem_mem = SemanticMemory(max_items=10, enabled=True)
sem_mem.add("user", "Tell me a joke")
similar = sem_mem.retrieve_similar("I want to hear something funny", top_k=1)
print("Semantically similar messages:", similar)

2. LLM Provider Integration

Location: llm_provider.py

Purpose: Provides a unified interface to interact with different LLM providers. The base class defines how to format tools, retrieve responses, and extract tool calls from the LLM's output.

Components:

  • LLMProvider (Abstract Base Class): Declares methods like format_tools, get_response, and extract_tool_call.
  • OpenAIProvider & AnthropicProvider: Concrete implementations that interface with OpenAI and Anthropic LLMs, respectively.

Key Features:

  • Tool Formatting: Converts tool configurations into the appropriate format for the LLM.
  • Response Handling: Processes the LLM's response, including checking for JSON-formatted tool calls (a sketch of tool handling follows the example below).

Example Usage:

from llm_provider import get_llm_provider

# Initialize an LLM provider (default: OpenAI)
llm = get_llm_provider("openai")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like today?"}
]

# Get a response from the LLM
response = llm.get_response(messages, tools=None)
print("LLM Response:", response)

3. Tool Management

Location: tool.py

Purpose: Provides utilities for creating, registering, and loading tools (e.g., a calculator) into the framework.

Key Features:

  • Tool Configuration Creation: ToolManager.create_tool_config allows you to define a tool's name, description, function path, and parameters (a sketch follows the example below).
  • Basic Tool Registration: ToolManager.register_basic_tools demonstrates how to register a simple calculator tool.
  • Agent Configuration Setup: ToolManager.setup_basic_config registers basic tools and generates an initial agent configuration file.

Example Usage:

from tool import ToolManager

# Register the basic tools, such as a calculator
ToolManager.setup_basic_config()
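
For custom tools, ToolManager.create_tool_config builds a configuration from a name, description, function path, and parameters. The keyword names and parameter schema below are assumptions based on that description, so treat this as a sketch rather than the exact signature:

from tool import ToolManager

# Hypothetical custom tool; argument names mirror the fields described above
weather_tool = ToolManager.create_tool_config(
    name="weather",
    description="Look up the current weather for a city",
    function_path="tools.weather.get_weather",  # dotted path to the implementation
    parameters={"city": {"type": "string", "description": "City name"}},
)
print(weather_tool)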

4. Agent Core

Location: agent.py

Purpose: Orchestrates the entire workflow, from reading the configuration and loading tools to processing queries with the LLM and executing any required tools.

Workflow:

  • Configuration Loading: Reads settings from a JSON file.
  • Tool Loading: Loads tool definitions based on configuration.
  • Memory Integration: Appends conversation history to the query context.
  • LLM Processing: Sends the prompt to the LLM and handles responses, including tool calls.
  • Tool Execution: Dynamically imports and executes the tool function when a tool call is detected (sketched below).
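
The dynamic-import step can be pictured as in the sketch below. This is a simplified illustration of the idea, not the code in agent.py; the function-path format and argument shape are assumptions.

import importlib

def execute_tool(function_path, arguments):
    # Split a dotted path such as "tools.calculator.calculate" into module and function
    module_path, function_name = function_path.rsplit(".", 1)
    module = importlib.import_module(module_path)
    func = getattr(module, function_name)
    return func(**arguments)

# Hypothetical call extracted from an LLM response (assumes a tools.calculator module exists)
result = execute_tool("tools.calculator.calculate", {"expression": "3 * 5"})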

Example Usage:

from agent import create_agent, run_agent

# Create an agent using the configuration file and specified LLM provider
agent = create_agent("agent_config.json", llm_provider="openai")

# Process a user query
response = run_agent(agent, "Calculate 3 * 5")
print("Agent Response:", response)

Summary

  • Memory Management ensures the agent maintains context with both basic and semantic capabilities.
  • LLM Provider Integration abstracts communication with various LLMs, ensuring flexibility and consistency.
  • Tool Management allows for dynamic extension of the agent's functionality via external tools.
  • Agent Core brings these components together to create a powerful and extensible conversational agent.

These core components form the backbone of the framework, enabling developers to build, customize, and extend the system for a wide range of applications.