Add llamaindex agent integration (#2831)

* white spaces

* add llamaindex agent wrapper for autogen

* formatting

* formatting fixes

* add support for llamaindex agents

* fix style

* fix style

* delete file

* re-add file

* fixes pre-commit errors

* feat: Add agentchat_group_chat_with_llamaindex_agents notebook

This commit adds the notebook "agentchat_group_chat_with_llamaindex_agents.ipynb", which demonstrates how to integrate LlamaIndex agents into AutoGen. The notebook includes code for setting up the API endpoint, creating LlamaIndex agents, and setting up a group chat.
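
For readers skimming this PR, a minimal sketch of the integration pattern the notebook demonstrates (the agent names, model, and empty tool list here are illustrative placeholders, not the notebook's exact content):

import os

import autogen
from autogen.agentchat.contrib.llamaindex_conversable_agent import LLamaIndexConversableAgent
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# Build a LlamaIndex ReAct agent (illustrative: no tools attached here).
llm = OpenAI(model="gpt-3.5-turbo-0125", api_key=os.environ.get("OPENAI_API_KEY", ""))
react_agent = ReActAgent.from_tools(tools=[], llm=llm, max_iterations=10)

# Wrap it so it can take part in an AutoGen group chat.
trip_assistant = LLamaIndexConversableAgent(
    "trip_specialist",
    llama_index_agent=react_agent,
    description="Helps customers discover locations to visit and things to do.",
)
user_proxy = autogen.UserProxyAgent(name="Admin", human_input_mode="ALWAYS", code_execution_config=False)

config_list = [{"model": "gpt-3.5-turbo-0125", "api_key": os.environ.get("OPENAI_API_KEY", "")}]
groupchat = autogen.GroupChat(
    agents=[trip_assistant, user_proxy], messages=[], max_round=10, speaker_selection_method="round_robin"
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
user_proxy.initiate_chat(manager, message="What can I find in Tokyo related to Hayao Miyazaki?")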

* Refactor code

* feat: Add test for LLamaIndexConversableAgent

This commit adds a new test file `test_llamaindex_conversable_agent.py` that contains a test case for the `LLamaIndexConversableAgent` class. The test verifies group chat functionality between a `LLamaIndexConversableAgent` and a user proxy agent, limiting the chat by the `max_round` parameter. It also checks that the number of rounds does not exceed the maximum specified rounds.

The purpose of this change is to ensure that the `LLamaIndexConversableAgent` behaves as expected and correctly handles group chats with limited rounds.

Note: This commit includes import statements and setup code necessary for running the test case.

* fix formatting

* feat: Add LlamaIndexAgent job to GitHub Actions workflow

This commit adds a new job called "LlamaIndexAgent" to the GitHub Actions workflow. The job runs on multiple operating systems (ubuntu-latest, macos-latest, windows-2019) and uses Python 3.11. It sets up the Python environment, installs the packages and dependencies needed for the LlamaIndex agent, performs coverage testing using pytest, and uploads the coverage report to Codecov.

The commit also includes changes to the test_llamaindex_conversable_agent.py file. It imports os and sys modules, appends a path to sys.path, and adds skip conditions for tests based on certain conditions.

These changes improve the CI/CD pipeline by adding a new job for LlamaIndexAgent and enhancing test conditions in test_llamaindex_conversable_agent.py.

* fix test yaml

* cleanup tests

* fix test run

* formatting

* add test

* fix yaml

* pr feedback

* add documentation to website

* fixed style

* edit to document page

* newline

* make skip reason easier to see

* compose skip reasons

* fix env variable name

* refactor: Update package installation in contrib workflows

- Replaced specific package installations with more general ones
- Updated the installation of llama-index packages and dependencies
- Added new packages for llama-index and Wikipedia tools

* Update dependencies and add new agents for group chat

- Update dependencies to specific versions
- Add new agent `entertainent_specialist` for discovering entertainment opportunities in a location
- Modify the `test_group_chat_with_llama_index_conversable_agent` function to include the new agent in the group chat

* Update pydantic version requirement in setup.py

- Update pydantic version requirement from "pydantic>=1.10,<3,!=2.6.0" to "pydantic>=1.10,<3"
- Remove comment about issue with pydantic 2.6.0
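
For context, a hypothetical excerpt of the affected setup.py requirement list (the neighboring entry is illustrative; only the pydantic pin is taken from this commit):

install_requires = [
    "openai>=1.3",        # illustrative neighbor entry, not part of this change
    "pydantic>=1.10,<3",  # previously "pydantic>=1.10,<3,!=2.6.0"; the 2.6.0 exclusion is dropped
]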

* Refactor ChatMessage instantiation in _extract_message_and_history()

- Refactored the instantiation of ChatMessage in the _extract_message_and_history() function.
- Added an empty dictionary as additional_kwargs to the ChatMessage constructor.
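
A small sketch of the resulting instantiation (the import path mirrors the one used in llamaindex_conversable_agent.py and may differ across llama-index versions):

from llama_index_client import ChatMessage  # import path as used in the new module

# History entries are now built with an explicit, empty additional_kwargs dict.
history_message = ChatMessage(content="Which parks are near Shibuya?", role="user", additional_kwargs={})
print(history_message.role, history_message.content)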

* Refactor test_llamaindex_conversable_agent.py

- Removed unused import and variable
- Updated OpenAI model to gpt-4
- Reduced max_iterations for location_specialist and entertainment_specialist from 30 to 5
- Changed human_input_mode for user_proxy to "NEVER"
- Added assertion for max_rounds in entertainent_assistant

These changes improve code efficiency and ensure proper functionality.

* Remove entertainment_specialist agent and update user_proxy settings in test_llamaindex_conversable_agent.py

- Remove the creation of entertainent_specialist agent
- Update the max_consecutive_auto_reply setting for user_proxy to 10
- Update the default_auto_reply setting for user_proxy to "Thank you. TERMINATE"

* Refactor installation of LlamaIndex packages and dependencies

- Simplify installation commands for LlamaIndex packages
- Remove specific version numbers from pip install commands

* Update test_llamaindex_conversable_agent.py to include verbose output during pytest.

- Add the -v flag to the pytest command in contrib-openai.yml.
- Print a message when skipping the test due to missing dependencies or key.

* Refactor OpenAI workflow, remove LlamaIndexAgent

This commit removes the LlamaIndexAgent from the OpenAI workflow in order to streamline and simplify the code. The LlamaIndexAgent was no longer necessary and its removal improves overall code organization and maintainability.

* feat: Add test for group chat functionality with LLamaIndexConversableAgent

use a mocked ReActAgent in the test

* Update Dockerfile for devcontainer

- Updated the Dockerfile for the devcontainer environment.
- Added installation of build-essential, npm, git-lfs, and other packages.
- Upgraded pip and installed pydoc-markdown, pyyaml, and colored libraries.

* Update devcontainer.json with new VS Code extensions and terminal settings

- Updated the list of VS Code extensions in devcontainer.json to include "GitHub.copilot"
- Added a new terminal profile for Linux with the path set to "/bin/bash"
- Set the default Linux terminal profile to "bash"

* removeall

* feat: Add Dockerfiles and devcontainer configurations

This commit adds Dockerfiles and devcontainer configurations for different use cases in the `.devcontainer` directory. The following changes were made:

- Added `Dockerfile` for basic setups (`base`)
- Added `Dockerfile` for advanced features (`full`)
- Added `Dockerfile` for AutoGen project developers (`dev`)
- Added `Dockerfile` for AutoGen project developers using Studio (`studio`)
- Updated existing files with necessary dependencies and configurations
- Modified README.md to provide instructions on customizing Dockerfiles and managing the Docker environment

These changes allow users to easily set up their AutoGen development environment using Docker containers.

* delete

* Add authors.yml file with author information

This commit adds the authors.yml file, which contains information about various authors contributing to the project. Each author entry includes their name, title, URL, and image URL. This file will be used to display author information on the website.

* delete

* Add test cases for agent chat functionality

This commit adds new test cases for the agent chat functionality. The test cases include scenarios such as auto feedback from code execution, function calls, currency calculator, async function calls, group chat finite state machine, cost token tracking, and group chat state flow. These test cases cover different versions of Python (3.10, 3.11, and 3.12) and are skipped if OpenAI is not installed or the Python version does not match.
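
A hedged sketch of the skip pattern described above (the flag, version check, and test name are illustrative; the real suite reads these from the repository's conftest.py):

import sys

import pytest

skip_openai = True  # assumption: in the real suite this is derived from a --skip-openai pytest option

@pytest.mark.skipif(
    skip_openai or sys.version_info[:2] != (3, 10),
    reason="OpenAI not installed/enabled or Python version does not match",
)
def test_agentchat_auto_feedback_from_code_execution():
    ...  # illustrative placeholder for one of the scenarios listed above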

* delete

* feat: Add LLM configuration documentation

This commit adds documentation for configuring an agent's access to LLMs. It includes information on the `llm_config` argument, `config_list`, and other configuration parameters. The commit also provides examples of filtering the `config_list` based on model names and tags. Additionally, it demonstrates how to add an HTTP client in `llm_config` for proxy usage. Finally, it mentions helper functions for loading a config list from API keys, environment variables, files, or `.env` files.

Closes #1234
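
A brief sketch of the configuration pattern this documentation covers (model names and the filter are illustrative):

import autogen

# Load candidate model configurations, e.g. from the OAI_CONFIG_LIST env var or file.
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Filter the list by model name; filtering by tags works the same way with {"tags": [...]}.
gpt4_config_list = autogen.filter_config(config_list, filter_dict={"model": ["gpt-4"]})

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": gpt4_config_list, "temperature": 0},
)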

* delete

* feat: Add LLM configuration documentation

This commit adds documentation for configuring an agent's access to LLMs. It includes information on the `llm_config` argument, `config_list`, and other configuration parameters. The commit also provides examples of filtering the `config_list` based on model names and tags. Additionally, it demonstrates how to add an HTTP client in `llm_config` for proxy usage. Finally, it mentions helper functions for loading a config list from various sources.

Closes #1234

* delete

* adding back notebooks

* reset

* feat: Add setup.py for package installation

This commit adds a new file, `setup.py`, which is used for installing the package. The `setup.py` file includes information such as the author, description, and dependencies of the package. This allows users to easily install and use the package in their projects.

The `setup.py` file also includes different extra requirements for specific functionalities, such as retrieving chat data or running Jupyter notebooks. These extra requirements are installed when specified during installation.

Overall, this addition improves the usability and installation process of the package.
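
A hypothetical sketch of the extras_require structure described above (the extra names and package lists are illustrative, not the actual contents of setup.py):

from setuptools import setup

setup(
    name="example-package",  # illustrative
    install_requires=["openai>=1.3", "pydantic>=1.10,<3"],  # illustrative base dependencies
    extras_require={
        "retrievechat": ["chromadb", "sentence_transformers"],  # illustrative extra for retrieving chat data
        "jupyter-executor": ["jupyter-kernel-gateway", "ipykernel"],  # illustrative extra for running notebooks
    },
)

Users would then opt into an extra with, for example, pip install "example-package[retrievechat]".
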
Diego Colombo 2024-05-31 13:58:15 -07:00 committed by GitHub
parent 341a21787d
commit 6e8331b754
6 changed files with 477 additions and 0 deletions

View File

@@ -444,3 +444,34 @@ jobs:
        with:
          file: ./coverage.xml
          flags: unittest
  LlamaIndexAgent:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-2019]
        python-version: ["3.11"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install packages and dependencies for all tests
        run: |
          python -m pip install --upgrade pip wheel
          pip install pytest-cov>=5
      - name: Install packages and dependencies for LlamaIndexConversableAgent
        run: |
          pip install -e .
          pip install llama-index
          pip install llama-index-llms-openai
      - name: Coverage
        run: |
          pytest test/agentchat/contrib/test_llamaindex_conversable_agent.py --skip-openai
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests

View File

@@ -0,0 +1,109 @@
from typing import Dict, List, Optional, Tuple, Union

from autogen import OpenAIWrapper
from autogen.agentchat import Agent, ConversableAgent
from autogen.agentchat.contrib.vectordb.utils import get_logger

logger = get_logger(__name__)

try:
    from llama_index.core.agent.runner.base import AgentRunner
    from llama_index.core.chat_engine.types import AgentChatResponse
    from llama_index_client import ChatMessage
except ImportError as e:
    logger.fatal("Failed to import llama-index. Try running 'pip install llama-index'")
    raise e


class LLamaIndexConversableAgent(ConversableAgent):
    def __init__(
        self,
        name: str,
        llama_index_agent: AgentRunner,
        description: Optional[str] = None,
        **kwargs,
    ):
        """
        Args:
            name (str): agent name.
            llama_index_agent (AgentRunner): LlamaIndex agent.
                Please override this attribute if you want to reprogram the agent.
            description (str): a short description of the agent. This description is used by other agents
                (e.g. the GroupChatManager) to decide when to call upon this agent.
            **kwargs (dict): Please refer to other kwargs in
                [ConversableAgent](../conversable_agent#__init__).
        """
        if llama_index_agent is None:
            raise ValueError("llama_index_agent must be provided")

        if description is None or description.isspace():
            raise ValueError("description must be provided")

        super().__init__(
            name,
            description=description,
            **kwargs,
        )

        self._llama_index_agent = llama_index_agent

        # Override `generate_oai_reply` and its async variant with the LlamaIndex-backed implementations.
        self.replace_reply_func(ConversableAgent.generate_oai_reply, LLamaIndexConversableAgent._generate_oai_reply)
        self.replace_reply_func(
            ConversableAgent.a_generate_oai_reply, LLamaIndexConversableAgent._a_generate_oai_reply
        )

    def _generate_oai_reply(
        self,
        messages: Optional[List[Dict]] = None,
        sender: Optional[Agent] = None,
        config: Optional[OpenAIWrapper] = None,
    ) -> Tuple[bool, Union[str, Dict, None]]:
        """Generate a reply using the LlamaIndex agent."""
        user_message, history = self._extract_message_and_history(messages=messages, sender=sender)

        chat_response: AgentChatResponse = self._llama_index_agent.chat(message=user_message, chat_history=history)

        extracted_response = chat_response.response

        return (True, extracted_response)

    async def _a_generate_oai_reply(
        self,
        messages: Optional[List[Dict]] = None,
        sender: Optional[Agent] = None,
        config: Optional[OpenAIWrapper] = None,
    ) -> Tuple[bool, Union[str, Dict, None]]:
        """Generate a reply asynchronously using the LlamaIndex agent."""
        user_message, history = self._extract_message_and_history(messages=messages, sender=sender)

        chat_response: AgentChatResponse = await self._llama_index_agent.achat(
            message=user_message, chat_history=history
        )

        extracted_response = chat_response.response

        return (True, extracted_response)

    def _extract_message_and_history(
        self, messages: Optional[List[Dict]] = None, sender: Optional[Agent] = None
    ) -> Tuple[str, List[ChatMessage]]:
        """Extract the latest message and the chat history from the messages."""
        if not messages:
            messages = self._oai_messages[sender]

        if not messages:
            return "", []

        message = messages[-1].get("content", "")

        history = messages[:-1]
        history_messages: List[ChatMessage] = []
        for history_message in history:
            content = history_message.get("content", "")
            role = history_message.get("role", "user")
            if role in ("user", "assistant"):
                history_messages.append(ChatMessage(content=content, role=role, additional_kwargs={}))
        return message, history_messages

View File

@@ -0,0 +1,225 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9a71fa36",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"# Groupchat with Llamaindex agents\n",
"\n",
"[Llamaindex agents](https://docs.llamaindex.ai/en/stable/optimizing/agentic_strategies/agentic_strategies/) have the ability to use planning strategies to answer user questions. They can be integrated in Autogen in easy ways\n",
"\n",
"## Requirements"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c528cd6d",
"metadata": {},
"outputs": [],
"source": [
"! pip install pyautogen\n",
"! pip install llama-index\n",
"! pip install llama-index-tools-wikipedia\n",
"! pip install llama-index-readers-wikipedia\n",
"! pip install wikipedia"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5ebd2397",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"## Set your API Endpoint"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dca301a4",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"import autogen\n",
"\n",
"config_list = [{\"model\": \"gpt-3.5-turbo-0125\", \"api_key\": os.getenv(\"OPENAI_API_KEY\")}]"
]
},
{
"cell_type": "markdown",
"id": "76c11ea8",
"metadata": {},
"source": [
"## Set Llamaindex"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2d3d298e",
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core import Settings\n",
"from llama_index.core.agent import ReActAgent\n",
"from llama_index.embeddings.openai import OpenAIEmbedding\n",
"from llama_index.llms.openai import OpenAI\n",
"from llama_index.tools.wikipedia import WikipediaToolSpec\n",
"\n",
"llm = OpenAI(\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" temperature=0.0,\n",
" api_key=os.environ.get(\"OPENAPI_API_KEY\", \"\"),\n",
")\n",
"\n",
"embed_model = OpenAIEmbedding(\n",
" model=\"text-embedding-ada-002\",\n",
" temperature=0.0,\n",
" api_key=os.environ.get(\"OPENAPI_API_KEY\", \"\"),\n",
")\n",
"\n",
"Settings.llm = llm\n",
"Settings.embed_model = embed_model\n",
"\n",
"# create a react agent to use wikipedia tool\n",
"wiki_spec = WikipediaToolSpec()\n",
"# Get the search wikipedia tool\n",
"wikipedia_tool = wiki_spec.to_tool_list()[1]\n",
"\n",
"location_specialist = ReActAgent.from_tools(tools=[wikipedia_tool], llm=llm, max_iterations=10, verbose=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2b9526e7",
"metadata": {},
"source": [
"## Create agents\n",
"\n",
"In this example, we will create a Llamaindex agent to answer questions fecting data from wikipedia and a user proxy agent."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a10c9fe-1fbc-40c6-b655-5d2256864ce8",
"metadata": {},
"outputs": [],
"source": [
"from llamaindex_conversable_agent import LLamaIndexConversableAgent\n",
"\n",
"llm_config = {\n",
" \"temperature\": 0,\n",
" \"config_list\": config_list,\n",
"}\n",
"\n",
"trip_assistant = LLamaIndexConversableAgent(\n",
" \"trip_specialist\",\n",
" llama_index_agent=location_specialist,\n",
" system_message=\"You help customers finding more about places they would like to visit. You can use external resources to provide more details as you engage with the customer.\",\n",
" description=\"This agents helps customers discover locations to visit, things to do, and other details about a location. It can use external resources to provide more details. This agent helps in finding attractions, history and all that there si to know about a place\",\n",
")\n",
"\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"Admin\",\n",
" human_input_mode=\"ALWAYS\",\n",
" code_execution_config=False,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "966c96a4-cc8a-4400-b8db-a21b7142e33c",
"metadata": {},
"source": [
"Next, let's set up our group chat."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "354b4a8f-7a96-455b-9f17-cbc19d880462",
"metadata": {},
"outputs": [],
"source": [
"groupchat = autogen.GroupChat(\n",
" agents=[trip_assistant, user_proxy],\n",
" messages=[],\n",
" max_round=500,\n",
" speaker_selection_method=\"round_robin\",\n",
" enable_clear_history=True,\n",
")\n",
"manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5518947",
"metadata": {},
"outputs": [],
"source": [
"chat_result = user_proxy.initiate_chat(\n",
" manager,\n",
" message=\"\"\"\n",
"What can i find in Tokyo related to Hayao Miyazaki and its moveis like Spirited Away?.\n",
"\"\"\",\n",
")"
]
}
],
"metadata": {
"front_matter": {
"description": "Integrate llamaindex agents with Autogen.",
"tags": [
"react",
"llama index",
"software engineering"
]
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,102 @@
#!/usr/bin/env python3 -m pytest
import os
import sys
import unittest
from unittest.mock import MagicMock, patch

import pytest
from conftest import MOCK_OPEN_AI_API_KEY

from autogen import GroupChat, GroupChatManager
from autogen.agentchat.contrib.llamaindex_conversable_agent import LLamaIndexConversableAgent
from autogen.agentchat.conversable_agent import ConversableAgent

sys.path.append(os.path.join(os.path.dirname(__file__), "../.."))
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))

from conftest import reason, skip_openai

skip_reasons = [reason]

try:
    from llama_index.core.agent import ReActAgent
    from llama_index.core.chat_engine.types import AgentChatResponse
    from llama_index.llms.openai import OpenAI

    skip_for_dependencies = False
    skip_reason = ""
except ImportError as e:
    skip_for_dependencies = True
    skip_reason = f"dependency not installed: {e.msg}"


openaiKey = MOCK_OPEN_AI_API_KEY


@pytest.mark.skipif(skip_for_dependencies, reason=skip_reason)
@patch("llama_index.core.agent.ReActAgent.chat")
def test_group_chat_with_llama_index_conversable_agent(chat_mock: MagicMock) -> None:
    """
    Tests group chat functionality with a LLamaIndexConversableAgent (wrapping a mocked ReActAgent)
    and a ConversableAgent acting as the user proxy.
    Verifies that the conversation terminates and that neither agent exceeds the expected number of rounds.
    """
    llm = OpenAI(
        model="gpt-4",
        temperature=0.0,
        api_key=openaiKey,
    )

    chat_mock.return_value = AgentChatResponse(
        response="Visit ghibli studio in Tokyo, Japan. It is a must-visit place for fans of Hayao Miyazaki and his movies like Spirited Away."
    )

    location_specialist = ReActAgent.from_tools(llm=llm, max_iterations=5)

    # create an autogen agent using the react agent
    trip_assistant = LLamaIndexConversableAgent(
        "trip_specialist",
        llama_index_agent=location_specialist,
        system_message="You help customers find out more about places they would like to visit. You can use external resources to provide more details as you engage with the customer.",
        description="This agent helps customers discover locations to visit, things to do, and other details about a location. It can use external resources to provide more details. This agent helps in finding attractions, history and all there is to know about a place.",
    )

    llm_config = False
    max_round = 5

    user_proxy = ConversableAgent(
        "customer",
        max_consecutive_auto_reply=10,
        human_input_mode="NEVER",
        llm_config=False,
        default_auto_reply="Thank you. TERMINATE",
    )

    group_chat = GroupChat(
        agents=[user_proxy, trip_assistant],
        messages=[],
        max_round=100,
        send_introductions=False,
        speaker_selection_method="round_robin",
    )

    group_chat_manager = GroupChatManager(
        groupchat=group_chat,
        llm_config=llm_config,
        is_termination_msg=lambda x: x.get("content", "").rstrip().find("TERMINATE") >= 0,
    )

    # Initiating the group chat and observing the number of rounds
    user_proxy.initiate_chat(
        group_chat_manager,
        message="What can I find in Tokyo related to Hayao Miyazaki and his movies like Spirited Away?",
    )

    # Assertions to check that the number of rounds does not exceed max_round
    assert all(len(arr) <= max_round for arr in trip_assistant._oai_messages.values()), "Agent 1 exceeded max rounds"
    assert all(len(arr) <= max_round for arr in user_proxy._oai_messages.values()), "User proxy exceeded max rounds"


if __name__ == "__main__":
    """Runs this file's tests from the command line."""
    test_group_chat_with_llama_index_conversable_agent()

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08965da22eddf8c9253e96c1424d9a9b67f210018a3e39372746b93f1d787c04
size 23548

View File

@@ -0,0 +1,7 @@
# Llamaindex
![Llamaindex Example](img/ecosystem-llamaindex.png)
[LlamaIndex](https://www.llamaindex.ai/) allows users to create LlamaIndex agents and integrate them into AutoGen conversation patterns.
- [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb)