Troubleshooting & FAQ

Troubleshooting

This section provides solutions to common issues you might encounter when using or extending the framework.

1. Agent Configuration Issues

Problem: "Config file not found: [path]"
Solution:
  • Verify that the configuration file (e.g., agent_config.json) exists in the specified location.
  • Check file permissions and ensure the path is correct.
Problem: "Invalid JSON in config file: [error details]"
Solution:
  • Use a JSON validator (such as jsonlint.com) to check your configuration file for syntax errors.
  • Ensure proper formatting (e.g., commas between entries, correct quotation marks).
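If you prefer to validate locally, Python's built-in json module reports the exact line and column of a syntax error. A minimal checker (the file name follows the agent_config.json example above):

```python
import json

def validate_config(path: str) -> dict:
    """Load a JSON config file, reporting the location of any syntax error."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        raise SystemExit(f"Config file not found: {path}")
    except json.JSONDecodeError as e:
        raise SystemExit(
            f"Invalid JSON in {path}: {e.msg} (line {e.lineno}, column {e.colno})"
        )
```

The line/column in the error message usually points at a trailing comma or a missing quotation mark.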

2. LLM Provider and API Key Errors

Problem: "Error: OpenAI API key not set. Please set the OPENAI_API_KEY environment variable."
Solution:

Set the API key in your environment:

export OPENAI_API_KEY="your_openai_api_key_here"

In a Windows Command Prompt, use:

set OPENAI_API_KEY=your_openai_api_key_here

In PowerShell, use:

$env:OPENAI_API_KEY = "your_openai_api_key_here"

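A common pitfall is setting the variable in one shell session and running the agent in another. A quick check from Python confirms the key is actually visible to the process:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named API key from the environment, or fail with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"Error: {name} not set. Please set the {name} environment variable."
        )
    return key
```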
Problem: "Module not installed. Please install it with 'pip install [module]'"
Solution:

Install the missing module using pip, e.g.:

pip install openai

Verify that the module is available in your current Python environment.
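One way to verify availability without importing (and without triggering the module's own side effects) is importlib.util.find_spec from the standard library:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Check whether a module can be imported in the current Python environment."""
    return importlib.util.find_spec(name) is not None
```

If this returns False for a module you just installed, you are likely running a different interpreter or virtual environment than the one pip installed into.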

3. Tool Loading and Execution Errors

Problem: "Warning: Tool config not found: tools/[tool_name].json"
Solution:
  • Confirm that the tool configuration file exists in the tools/ directory.
  • Verify that the tool name in your configuration file is spelled correctly.
Problem: "Error loading tool function: [error details]"
Solution:
  • Check the function_path in the tool configuration to ensure it points to the correct module and function.
  • Confirm that the corresponding Python file exists and that the function is defined.
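The framework's exact loader isn't shown here, but a function_path such as "tools.calculator.add" is typically resolved along these lines (a sketch, not the actual implementation):

```python
import importlib

def load_tool_function(function_path: str):
    """Resolve a dotted path like 'tools.calculator.add' to a callable."""
    module_path, _, func_name = function_path.rpartition(".")
    module = importlib.import_module(module_path)   # raises ImportError if missing
    try:
        return getattr(module, func_name)
    except AttributeError:
        raise ImportError(
            f"Error loading tool function: {func_name} not found in {module_path}"
        )
```

If resolution fails, check each half of the path separately: can you import the module on its own, and does the function name match its definition exactly?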

4. Memory Issues

Problem: Memory not storing messages as expected
Solution:
  • Verify that memory is enabled in your configuration ("memory": true).
  • For semantic memory, check that dependencies (SentenceTransformers, FAISS) are installed and that the embedding model is loading correctly.

5. RAG (Retrieval-Augmented Generation) Issues

Problem: No documents retrieved or incorrect indexing
Solution:
  • Ensure you have correctly indexed documents using the index_documents method.
  • Check that the documents provided are non-empty and in plain text.
  • Validate that the FAISS index is built correctly and that the embeddings are computed.
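Conceptually, indexing and retrieval reduce to embedding documents and ranking them by similarity to the query. The sketch below substitutes a toy character-frequency embedding and plain NumPy for SentenceTransformers/FAISS, purely to illustrate the flow that index_documents is assumed to follow (including skipping empty documents):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: normalized character-frequency vector (stand-in for a real model)."""
    v = np.zeros(128)
    for ch in text.lower():
        v[ord(ch) % 128] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

class ToyIndex:
    def __init__(self):
        self.docs, self.vecs = [], []

    def index_documents(self, docs):
        for d in docs:
            if d.strip():                 # skip empty documents
                self.docs.append(d)
                self.vecs.append(embed(d))

    def retrieve(self, query, k=1):
        if not self.docs:
            return []
        sims = np.array(self.vecs) @ embed(query)   # cosine similarities
        return [self.docs[i] for i in np.argsort(-sims)[:k]]
```

If retrieval returns nothing here, the cause is almost always that index_documents was never called or was called with empty input; the same diagnosis applies to the real FAISS-backed index.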

Frequently Asked Questions

Q1: How do I switch between basic and semantic memory?

Update your configuration file by adding or modifying the "memory_type" key. For example:

{
  "config": {
    "memory": true,
    "memory_type": "semantic"
  }
}

In the agent initialization code, the memory module is chosen based on this value.
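A minimal sketch of that selection logic, with hypothetical class names (the framework's actual memory classes may be named differently):

```python
class BasicMemory:
    """Stores raw messages in order (stand-in for the basic memory module)."""
    def __init__(self):
        self.messages = []
    def store(self, msg):
        self.messages.append(msg)

class SemanticMemory(BasicMemory):
    """Would embed and index messages in the real implementation."""
    pass

def make_memory(config: dict):
    """Choose a memory backend from the agent config, as described above."""
    if not config.get("memory", False):
        return None
    if config.get("memory_type") == "semantic":
        return SemanticMemory()
    return BasicMemory()
```

Note that "memory": true must be set for "memory_type" to have any effect.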

Q2: How can I add new tools to the framework?

Use the ToolManager.create_tool_config method to define a new tool, then implement the tool function in the appropriate module under the tools/ directory. Finally, add the tool's name to your agent configuration file under the "tools" list.
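As an illustration of what the two pieces might look like side by side (the exact schema produced by ToolManager.create_tool_config isn't shown here, so the field names below are assumptions):

```python
# Hypothetical contents of tools/calculator.json, as a Python dict:
tool_config = {
    "name": "calculator",
    "description": "Add two numbers.",
    "function_path": "tools.calculator.add",   # module and function the loader resolves
    "parameters": {"a": "number", "b": "number"},
}

# tools/calculator.py -- the function the config points at:
def add(a: float, b: float) -> float:
    return a + b
```

The key invariant is that function_path, the module's file location, and the function's name all agree; a mismatch produces the tool-loading errors described in the Troubleshooting section.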

Q3: How do I integrate a new LLM provider?

Implement a new provider class that inherits from LLMProvider and implements the required methods (format_tools, get_response, extract_tool_call). Update the get_llm_provider function in llm_provider.py to include your new provider.
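A skeletal provider using the three method names listed above. The base class's real signatures may differ; treat this as a shape, not the actual API (the echo behavior is purely for local testing):

```python
class LLMProvider:
    """Stand-in for the framework's base class in llm_provider.py."""
    def format_tools(self, tools): raise NotImplementedError
    def get_response(self, messages): raise NotImplementedError
    def extract_tool_call(self, response): raise NotImplementedError

class EchoProvider(LLMProvider):
    """Minimal provider that echoes the last user message."""
    def format_tools(self, tools):
        # Translate the framework's tool configs into the provider's wire format.
        return [{"name": t["name"], "description": t.get("description", "")}
                for t in tools]

    def get_response(self, messages):
        return messages[-1]["content"]

    def extract_tool_call(self, response):
        return None   # this toy provider never requests a tool call
```

A real provider would call the vendor's API inside get_response and parse its tool-call payload inside extract_tool_call.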

Q4: Can I use this framework for multimodal applications?

Yes. The framework includes a MultimodalProcessor in multimodal_support.py for processing images (and a stub for audio). You can extend this module to integrate additional libraries or services for richer multimodal functionality.

Q5: What should I do if I encounter a type error?

Use the helper functions in type_safety.py to validate inputs and outputs. Review the error message to determine which parameter is causing the issue, and verify that it matches the expected type.
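The helpers in type_safety.py aren't reproduced here, but a validation function of this general shape makes the offending parameter obvious from the error message (a sketch, assuming simple isinstance checks):

```python
def validate_type(name: str, value, expected: type):
    """Raise a descriptive TypeError when a parameter has the wrong type."""
    if not isinstance(value, expected):
        raise TypeError(
            f"Parameter '{name}' expected {expected.__name__}, "
            f"got {type(value).__name__}"
        )
    return value
```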

Q6: Where can I find logs or additional debugging information?

The framework prints warnings and error messages to the console. For more detailed logging, consider integrating a logging framework (such as Python's built-in logging module) to capture and persist logs.
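A minimal logging setup that keeps console output while also persisting messages to a file (the logger name and file name here are arbitrary choices):

```python
import logging

# force=True replaces any handlers configured earlier, so this setup
# takes effect even if another module already called basicConfig.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    handlers=[logging.StreamHandler(), logging.FileHandler("agent.log")],
    force=True,
)
log = logging.getLogger("agent")
log.warning("Tool config not found: tools/example.json")
```

With this in place, the console warnings mentioned above are also captured in agent.log for later inspection.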

Q7: How can I report bugs or contribute to the framework?

If you encounter any bugs or have suggestions for improvements, consider opening an issue on the project's GitHub repository. Contributions via pull requests are also welcome.

Additional Tips

Testing

Before deploying, test each module individually. For example, run simple scripts to verify memory storage, LLM responses, and tool executions.

Documentation

Keep your configuration files and custom modules well-documented, making it easier to onboard new team members or contributors.

Community Support

If you run into issues that aren't covered in this FAQ, consider reaching out to relevant community forums (e.g., GitHub Issues, Stack Overflow) for further assistance.