Add Tutorial for AgentChat Docs (#3849)

This commit is contained in:
Victor Dibia 2024-10-21 05:45:44 -07:00 committed by GitHub
parent 4ff062d5b3
commit f747b3c884
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
16 changed files with 1160 additions and 699 deletions

View File

@ -1,22 +0,0 @@
---
myst:
html_meta:
"description lang=en": |
User Guide for AutoGen AgentChat, a framework for building multi-agent applications with AI agents.
---
# Guides
```{warning}
🚧 Under construction 🚧
```
```{toctree}
:maxdepth: 1
:hidden:
tool_use
code-execution
selector-group-chat
```

View File

@ -1,185 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tool Use\n",
"\n",
"The `AgentChat` api provides a `ToolUseAssistantAgent` with presets for adding tools that the agent can call as part of it's response. \n",
"\n",
":::{note}\n",
"\n",
"The example presented here is a work in progress 🚧. Also, tool uses here assumed the `model_client` used by the agent supports tool calling. \n",
"::: "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import ToolUseAssistantAgent\n",
"from autogen_agentchat.teams import RoundRobinGroupChat, StopMessageTermination\n",
"from autogen_core.components.models import OpenAIChatCompletionClient\n",
"from autogen_core.components.tools import FunctionTool"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In AgentChat, a Tool is a function wrapped in the `FunctionTool` class exported from `autogen_core.components.tools`. "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"async def get_weather(city: str) -> str:\n",
" return f\"The weather in {city} is 72 degrees and Sunny.\"\n",
"\n",
"\n",
"get_weather_tool = FunctionTool(get_weather, description=\"Get the weather for a city\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, agents that use tools are defined in the following manner. \n",
"\n",
"- An agent is instantiated based on the `ToolUseAssistantAgent` class in AgentChat. The agent is aware of the tools it can use by passing a `tools_schema` attribute to the class, which is passed to the `model_client` when the agent generates a response.\n",
"- An agent Team is defined that takes a list of `tools`. Effectively, the `ToolUseAssistantAgent` can generate messages that call tools, and the team is responsible executing those tool calls and returning the results."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-08T20:34:31.935149]:\u001b[0m\n",
"\n",
"What's the weather in New York?"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-08T20:34:33.080494], Weather_Assistant:\u001b[0m\n",
"\n",
"The weather in New York is 72 degrees and sunny. \n",
"\n",
"TERMINATE"
]
}
],
"source": [
"assistant = ToolUseAssistantAgent(\n",
" \"Weather_Assistant\",\n",
" model_client=OpenAIChatCompletionClient(model=\"gpt-4o-mini\"),\n",
" registered_tools=[get_weather_tool],\n",
")\n",
"team = RoundRobinGroupChat([assistant])\n",
"result = await team.run(\"What's the weather in New York?\", termination_condition=StopMessageTermination())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using Langchain Tools \n",
"\n",
"AutoGen also provides direct support for tools from LangChain via the `autogen_ext` package.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# pip install langchain, langchain-community, wikipedia , autogen-ext\n",
"\n",
"import wikipedia\n",
"from autogen_ext.tools.langchain import LangChainToolAdapter\n",
"from langchain.tools import WikipediaQueryRun\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"\n",
"api_wrapper = WikipediaAPIWrapper(wiki_client=wikipedia, top_k_results=1, doc_content_chars_max=100)\n",
"tool = WikipediaQueryRun(api_wrapper=api_wrapper)\n",
"\n",
"langchain_wikipedia_tool = LangChainToolAdapter(tool)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-08T20:44:08.218758]:\u001b[0m\n",
"\n",
"Who was the first president of the United States?\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-08T20:44:11.240067], WikiPedia_Assistant:\u001b[0m\n",
"\n",
"The first president of the United States was George Washington, who served from April 30, 1789, to March 4, 1797. \n",
"\n",
"TERMINATE"
]
}
],
"source": [
"wikipedia_assistant = ToolUseAssistantAgent(\n",
" \"WikiPedia_Assistant\",\n",
" model_client=OpenAIChatCompletionClient(model=\"gpt-4o-mini\"),\n",
" registered_tools=[langchain_wikipedia_tool],\n",
")\n",
"team = RoundRobinGroupChat([wikipedia_assistant])\n",
"result = await team.run(\n",
" \"Who was the first president of the United States?\", termination_condition=StopMessageTermination()\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -11,31 +11,49 @@ AgentChat is a high-level package for building multi-agent applications built on
AgentChat aims to provide intuitive defaults, such as **Agents** with preset behaviors and **Teams** with predefined communication protocols, to simplify building multi-agent applications.
```{include} warning.md
```
```{tip}
If you are interested in implementing complex agent interaction behaviours, defining custom messaging protocols, or orchestration mechanisms, consider using the [`autogen-core`](../core-user-guide/index.md) package.
```
::::{grid} 2 2 2 2
:gutter: 3
:::{grid-item-card} {fas}`download;pst-color-primary` Installation
:link: ./installation.html
How to install AgentChat
:::
:::{grid-item-card} {fas}`rocket;pst-color-primary` Quickstart
:link: ./quickstart.html
Build your first agent
:::
:::{grid-item-card} {fas}`graduation-cap;pst-color-primary` Tutorial
:link: ./tutorial/index.html
Step-by-step guide to using AgentChat
:::
:::{grid-item-card} {fas}`code;pst-color-primary` Examples
:link: ./examples/index.html
Sample code and use cases
:::
::::
```{toctree}
:maxdepth: 1
:hidden:
installation
quickstart
tutorial/index
examples/index
```

View File

@ -0,0 +1,78 @@
---
myst:
html_meta:
"description lang=en": |
Installing AutoGen AgentChat
---
# Installation
## Create a virtual environment (optional)
When installing AgentChat locally, we recommend using a virtual environment. This ensures that AgentChat's dependencies are isolated from the rest of your system.
``````{tab-set}
`````{tab-item} venv
Create and activate:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
To deactivate later, run:
```bash
deactivate
```
`````
`````{tab-item} conda
[Install Conda](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html) if you have not already.
Create and activate:
```bash
conda create -n autogen python=3.10
conda activate autogen
```
To deactivate later, run:
```bash
conda deactivate
```
`````
``````
## Install the AgentChat Package
Install the `autogen-agentchat` package using pip:
```bash
pip install autogen-agentchat==0.4.0dev1
```
## Install Docker for Code Execution
We recommend using Docker for code execution.
To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/).
A simple example of how to use Docker for code execution is shown below:
<!-- ```{include} stocksnippet.md
``` -->
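The sketch below is adapted from the v0.4x quick-start snippet elsewhere in these docs: a `CodingAssistantAgent` writes code and a `CodeExecutorAgent` runs it inside a Docker container via `DockerCommandLineCodeExecutor`. The task string and `work_dir` are illustrative, and an OpenAI API key is assumed to be available in the environment.
```python
import asyncio
import logging

from autogen_agentchat import EVENT_LOGGER_NAME
from autogen_agentchat.agents import CodeExecutorAgent, CodingAssistantAgent
from autogen_agentchat.logging import ConsoleLogHandler
from autogen_agentchat.teams import RoundRobinGroupChat, StopMessageTermination
from autogen_core.components.models import OpenAIChatCompletionClient
from autogen_ext.code_executors import DockerCommandLineCodeExecutor

# Print agent messages to the console as the team runs.
logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.addHandler(ConsoleLogHandler())
logger.setLevel(logging.INFO)


async def main() -> None:
    # Code extracted from messages is staged under ./coding and executed in Docker.
    async with DockerCommandLineCodeExecutor(work_dir="coding") as code_executor:
        code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)
        coding_assistant_agent = CodingAssistantAgent(
            "coding_assistant", model_client=OpenAIChatCompletionClient(model="gpt-4o")
        )
        group_chat = RoundRobinGroupChat([coding_assistant_agent, code_executor_agent])
        await group_chat.run(
            task="Plot a sine wave with matplotlib and save it to 'sine_wave.png'.",
            termination_condition=StopMessageTermination(),
        )


asyncio.run(main())
```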
To learn more about agents that execute code, see the [agents tutorial](./tutorial/agents.ipynb).

View File

@ -0,0 +1,133 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Quickstart"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{include} warning.md\n",
"\n",
"```\n",
"\n",
":::{note}\n",
"For installation instructions, please refer to the [installation guide](./installation).\n",
":::\n",
"\n",
"\n",
"\n",
"An agent is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. \n",
"\n",
"In AgentChat, agents can be rapidly implemented using preset agent configurations. To illustrate this, we will begin with creating an agent that can address tasks by responding to messages it receives. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:32.381165]:\u001b[0m\n",
"\n",
"What is the weather in New York?\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:33.698359], writing_agent:\u001b[0m\n",
"\n",
"The weather in New York is currently 73 degrees and sunny. TERMINATE\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:33.698749], Termination:\u001b[0m\n",
"\n",
"Maximal number of messages 1 reached, current message count: 1\n",
" TeamRunResult(messages=[TextMessage(source='user', content='What is the weather in New York?'), StopMessage(source='writing_agent', content='The weather in New York is currently 73 degrees and sunny. TERMINATE')])\n"
]
}
],
"source": [
"import logging\n",
"\n",
"from autogen_agentchat import EVENT_LOGGER_NAME\n",
"from autogen_agentchat.agents import CodingAssistantAgent, ToolUseAssistantAgent\n",
"from autogen_agentchat.logging import ConsoleLogHandler\n",
"from autogen_agentchat.teams import MaxMessageTermination, RoundRobinGroupChat\n",
"from autogen_core.components.models import OpenAIChatCompletionClient\n",
"from autogen_core.components.tools import FunctionTool\n",
"\n",
"logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
"logger.addHandler(ConsoleLogHandler())\n",
"logger.setLevel(logging.INFO)\n",
"\n",
"\n",
"# define a tool\n",
"async def get_weather(city: str) -> str:\n",
" return f\"The weather in {city} is 73 degrees and Sunny.\"\n",
"\n",
"\n",
"# wrap the tool for use with the agent\n",
"get_weather_tool = FunctionTool(get_weather, description=\"Get the weather for a city\")\n",
"\n",
"# define an agent\n",
"weather_agent = ToolUseAssistantAgent(\n",
" name=\"writing_agent\",\n",
" model_client=OpenAIChatCompletionClient(model=\"gpt-4o-2024-08-06\"),\n",
" registered_tools=[get_weather_tool],\n",
")\n",
"\n",
"# add the agent to a team\n",
"agent_team = RoundRobinGroupChat([weather_agent])\n",
"result = await agent_team.run(\n",
" task=\"What is the weather in New York?\",\n",
" termination_condition=MaxMessageTermination(max_messages=1),\n",
")\n",
"print(\"\\n\", result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code snippet above introduces two high level concepts in AgentChat: `Agent` and `Team`. An Agent helps us define what actions are taken when a message is received. Specifically, we use the `ToolUseAssistantAgent` preset - an agent that can be given a function that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the `RoundRobinGroupChat` team, agents receive messages in a sequential round-robin fashion. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What's Next?\n",
"\n",
"Now that you have a basic understanding of how to define an agent and a team, consider following the [tutorial](./tutorial/index) for a walkthrough on other features of AgentChat.\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,49 +0,0 @@
---
myst:
html_meta:
"description lang=en": |
Quick Start Guide for AgentChat: Migrating from AutoGen 0.2x to 0.4x.
---
# Quick Start
The AgentChat API, introduced in AutoGen 0.4x, offers a similar level of abstraction to the default agent classes in AutoGen 0.2x.
## Installation
Install the `autogen-agentchat` package using pip:
```bash
pip install autogen-agentchat==0.4.0dev1
```
:::{note}
For further installation instructions, please refer to the [package information](pkg-info-autogen-agentchat).
:::
## Creating a Simple Agent Team
The following example illustrates creating a simple agent team with two agents that interact to solve a task.
1. `CodingAssistantAgent` that generates responses using an LLM.
2. `CodeExecutorAgent` that executes code snippets and returns the output.
Because the `CodeExecutorAgent` uses a Docker command-line code executor to execute code snippets,
you need to have [Docker installed](https://docs.docker.com/engine/install/) and running on your machine.
The task is to "Create a plot of NVIDIA and TESLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png'."
```{include} stocksnippet.md
```
```{tip}
AgentChat in v0.4x provides similar abstractions to the default agents in v0.2x. The `CodingAssistantAgent` and `CodeExecutorAgent` in v0.4x are equivalent to the `AssistantAgent` and `UserProxyAgent` with code execution in v0.2x.
```
If you are exploring migrating your code from AutoGen 0.2x to 0.4x, the following are some key differences to consider:
1. In v0.4x, agent interactions are managed by `Teams` (e.g., `RoundRobinGroupChat`), replacing direct chat initiation.
2. v0.4x uses async/await syntax for improved performance and scalability.
3. Configuration in v0.4x is more modular, with separate components for code execution and LLM clients.

View File

@ -1,54 +0,0 @@
``````{tab-set}
`````{tab-item} AgentChat (v0.4x)
```python
import asyncio
import logging
from autogen_agentchat import EVENT_LOGGER_NAME
from autogen_agentchat.agents import CodeExecutorAgent, CodingAssistantAgent
from autogen_agentchat.logging import ConsoleLogHandler
from autogen_agentchat.teams import RoundRobinGroupChat, StopMessageTermination
from autogen_ext.code_executors import DockerCommandLineCodeExecutor
from autogen_core.components.models import OpenAIChatCompletionClient
logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.addHandler(ConsoleLogHandler())
logger.setLevel(logging.INFO)
async def main() -> None:
async with DockerCommandLineCodeExecutor(work_dir="coding") as code_executor:
code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)
coding_assistant_agent = CodingAssistantAgent(
"coding_assistant", model_client=OpenAIChatCompletionClient(model="gpt-4o", api_key="YOUR_API_KEY")
)
group_chat = RoundRobinGroupChat([coding_assistant_agent, code_executor_agent])
result = await group_chat.run(
task="Create a plot of NVDIA and TSLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png'.",
termination_condition=StopMessageTermination(),
)
asyncio.run(main())
```
`````
`````{tab-item} v0.2x
```python
from autogen.coding import DockerCommandLineCodeExecutor
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
llm_config = {"model": "gpt-4o", "api_type": "openai", "api_key": "YOUR_API_KEY"}
code_executor = DockerCommandLineCodeExecutor(work_dir="coding")
assistant = AssistantAgent("assistant", llm_config=llm_config)
code_executor_agent = UserProxyAgent(
"code_executor_agent",
code_execution_config={"executor": code_executor},
)
result = code_executor_agent.initiate_chat(
assistant,
message="Create a plot of NVIDIA and TESLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png'.",
)
code_executor.stop()
```
`````
``````

View File

@ -0,0 +1,320 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"# Agents\n",
"\n",
"An agent is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. \n",
"\n",
"```{include} ../warning.md\n",
"\n",
"```\n",
"\n",
"AgentChat provides a set of preset Agents, each with variations in how an agent might respond to received messages. \n",
"\n",
"Each agent inherits from a {py:class}`~autogen_agentchat.agents.BaseChatAgent` class with a few generic properties:\n",
"\n",
"- `name`: The name of the agent. This is used by the team to uniquely identify the agent. It should be unique within the team.\n",
"- `description`: The description of the agent. This is used by the team to make decisions about which agents to use. The description should detail the agent's capabilities and how to interact with it.\n",
" \n",
"```{tip}\n",
"How do agents send and receive messages? \n",
"\n",
"AgentChat is built on the `autogen-core` package, which handles the details of sending and receiving messages. `autogen-core` provides a runtime environment, which facilitates communication between agents (message sending and delivery), manages their identities and lifecycles, and enforces security and privacy boundaries. \n",
"AgentChat handles the details of instantiating a runtime and registering agents with the runtime.\n",
"\n",
"To learn more about the runtime in `autogen-core`, see the [autogen-core documentation on agents and runtime](../../core-user-guide/framework/agent-and-agent-runtime.ipynb).\n",
"```\n",
"\n",
"Each agent also implements an {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method that defines the behavior of the agent in response to a message.\n",
"\n",
"\n",
"To begin, let us import the required classes and set up a model client that will be used by agents.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from autogen_agentchat import EVENT_LOGGER_NAME\n",
"from autogen_agentchat.agents import CodingAssistantAgent, TextMessage, ToolUseAssistantAgent\n",
"from autogen_agentchat.logging import ConsoleLogHandler\n",
"from autogen_agentchat.teams import MaxMessageTermination, RoundRobinGroupChat, SelectorGroupChat\n",
"from autogen_core.base import CancellationToken\n",
"from autogen_core.components.models import OpenAIChatCompletionClient\n",
"from autogen_core.components.tools import FunctionTool\n",
"\n",
"logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
"logger.addHandler(ConsoleLogHandler())\n",
"logger.setLevel(logging.INFO)\n",
"\n",
"\n",
"# Create an OpenAI model client.\n",
"model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
")"
]
},
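{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the generic properties described above, the sketch below constructs a preset agent and reads back its `name` and `description`. This assumes both are exposed as attributes on the agent; the strings used here are illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Construct a preset agent and inspect its generic BaseChatAgent properties.\n",
"sample_agent = CodingAssistantAgent(\n",
"    name=\"sample_assistant\",\n",
"    system_message=\"You are a helpful assistant.\",\n",
"    model_client=model_client,\n",
")\n",
"print(sample_agent.name)  # used by a team to uniquely identify the agent\n",
"print(sample_agent.description)  # used by a team to decide when to involve the agent"
]
},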
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<!-- ## CodingAssistantAgent\n",
"\n",
"Generates responses (text and code) using an LLM upon receipt of a message. It takes a `system_message` argument that defines or sets the tone for how the agent's LLM should respond. \n",
"\n",
"```python\n",
"\n",
"writing_assistant_agent = CodingAssistantAgent(\n",
" name=\"writing_assistant_agent\",\n",
" system_message=\"You are a helpful assistant that solve tasks by generating text responses and code.\",\n",
" model_client=model_client,\n",
")\n",
"`\n",
"\n",
"We can explore or test the behavior of the agent by sending a message to it using the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method. \n",
"\n",
"```python\n",
"result = await writing_assistant_agent.on_messages(\n",
" messages=[\n",
" TextMessage(content=\"What is the weather right now in France?\", source=\"user\"),\n",
" ],\n",
" cancellation_token=CancellationToken(),\n",
")\n",
"print(result) -->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## ToolUseAssistantAgent\n",
"\n",
"This agent responds to messages by making appropriate tool or function calls.\n",
"\n",
"```{tip}\n",
"Understanding Tool Calling\n",
"\n",
"Large Language Models (LLMs) are typically limited to generating text or code responses. However, many complex tasks benefit from the ability to use external tools that perform specific actions, such as fetching data from APIs or databases.\n",
"\n",
"To address this limitation, modern LLMs can now accept a list of available tool schemas (descriptions of tools and their arguments) and generate a tool call message. This capability is known as **Tool Calling** or **Function Calling** and is becoming a popular pattern in building intelligent agent-based applications.\n",
"\n",
"For more information on tool calling, refer to the documentation from [OpenAI](https://platform.openai.com/docs/guides/function-calling) and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use).\n",
"```\n",
"\n",
"To set up a ToolUseAssistantAgent in AgentChat, follow these steps:\n",
"\n",
"1. Define the tool, typically as a Python function.\n",
"2. Wrap the function in the `FunctionTool` class from the `autogen-core` package. This ensures the function schema can be correctly parsed and used for tool calling.\n",
"3. Attach the tool to the agent.\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"async def get_weather(city: str) -> str:\n",
" return f\"The weather in {city} is 72 degrees and Sunny.\"\n",
"\n",
"\n",
"get_weather_tool = FunctionTool(get_weather, description=\"Get the weather for a city\")\n",
"\n",
"tool_use_agent = ToolUseAssistantAgent(\n",
" \"tool_use_agent\",\n",
" system_message=\"You are a helpful assistant that solves tasks by only using your tools.\",\n",
" model_client=model_client,\n",
" registered_tools=[get_weather_tool],\n",
")"
]
},
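{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what the model client actually receives, you can inspect the schema that `FunctionTool` derives from the function signature and description. This sketch assumes the tool exposes a `schema` property; consult the `autogen_core.components.tools` API reference for your version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the schema generated for the get_weather tool (property name assumed).\n",
"print(get_weather_tool.schema)"
]
},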
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"source='tool_use_agent' content=\"Could you please specify a city in France for which you'd like to get the current weather?\"\n"
]
}
],
"source": [
"tool_result = await tool_use_agent.on_messages(\n",
" messages=[\n",
" TextMessage(content=\"What is the weather right now in France?\", source=\"user\"),\n",
" ],\n",
" cancellation_token=CancellationToken(),\n",
")\n",
"print(tool_result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that the response generated by the ToolUseAssistantAgent is a tool call message which can then be executed to get the right result. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## CodeExecutionAgent \n",
"\n",
"This agent preset extracts and executes code snippets found in received messages and returns the output. It is typically used within a team where a `CodingAssistantAgent` is also present - the `CodingAssistantAgent` can generate code snippets, which the `CodeExecutionAgent` receives and executes to make progress on a task. \n",
"\n",
"```{note}\n",
"It is recommended that the `CodeExecutionAgent` uses a Docker container to execute code snippets. This ensures that the code snippets are executed in a safe and isolated environment. To use Docker, your environment must have Docker installed and running. \n",
"If you do not have Docker installed, you can install it from the [Docker website](https://docs.docker.com/get-docker/) or alternatively skip the next cell.\n",
"```\n",
"\n",
"In the code snippet below, we show how to set up a `CodeExecutionAgent` that uses the `DockerCommandLineCodeExecutor` class to execute code snippets in a Docker container. The `work_dir` parameter indicates where all executed files are first saved locally before being executed in the Docker container.\n",
"\n",
"\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"source='code_executor' content='No code blocks found in the thread.'\n"
]
}
],
"source": [
"from autogen_agentchat.agents import CodeExecutorAgent\n",
"from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
"\n",
"async with DockerCommandLineCodeExecutor(work_dir=\"coding\") as code_executor: # type: ignore[syntax]\n",
" code_executor_agent = CodeExecutorAgent(\"code_executor\", code_executor=code_executor)\n",
" code_execution_result = await code_executor_agent.on_messages(\n",
" messages=[\n",
" TextMessage(content=\"Here is some code \\n ```python print('Hello world') \\n``` \", source=\"user\"),\n",
" ],\n",
" cancellation_token=CancellationToken(),\n",
" )\n",
" print(code_execution_result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building Custom Agents\n",
"\n",
"In many cases, you may have agents with custom behaviors that do not fall into any of the preset agent categories. In such cases, you can build custom agents by subclassing the {py:class}`~autogen_agentchat.agents.BaseChatAgent` class and implementing the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method.\n",
"\n",
"A common example is an agent that can be part of a team but primarily is driven by human input. Other examples include agents that respond with specific text, tool or function calls. \n",
"\n",
"In the example below we show hot to implement a `UserProxyAgent` - an agent that asks the user to enter some text and then returns that message as a response. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"source='user_proxy_agent' content='Hello there'\n"
]
}
],
"source": [
"import asyncio\n",
"from typing import Sequence\n",
"\n",
"from autogen_agentchat.agents import (\n",
" BaseChatAgent,\n",
" ChatMessage,\n",
" StopMessage,\n",
" TextMessage,\n",
")\n",
"\n",
"\n",
"class UserProxyAgent(BaseChatAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(name, \"A human user.\")\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> ChatMessage:\n",
" user_input = await asyncio.get_event_loop().run_in_executor(None, input, \"Enter your response: \")\n",
" if \"TERMINATE\" in user_input:\n",
" return StopMessage(content=\"User has terminated the conversation.\", source=self.name)\n",
" return TextMessage(content=user_input, source=self.name)\n",
"\n",
"\n",
"user_proxy_agent = UserProxyAgent(name=\"user_proxy_agent\")\n",
"\n",
"user_proxy_agent_result = await user_proxy_agent.on_messages(\n",
" messages=[\n",
" TextMessage(content=\"What is the weather right now in France?\", source=\"user\"),\n",
" ],\n",
" cancellation_token=CancellationToken(),\n",
")\n",
"print(user_proxy_agent_result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"So far, we have learned a few key concepts:\n",
"\n",
"- How to define agents \n",
"- How to send messages to agents by calling the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method on the {py:class}`~autogen_agentchat.agents.BaseChatAgent` class and viewing the agent's response \n",
"- An overview of the different types of agents available in AgentChat\n",
"- How to build custom agents\n",
"\n",
"However, the ability to address complex tasks is often best served by groups of agents that interact as a team. Let us review how to build these teams."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,54 @@
---
myst:
html_meta:
"description lang=en": |
Tutorial for AutoGen AgentChat, a framework for building multi-agent applications with AI agents.
---
# Tutorial
Tutorial to get started with AgentChat.
```{include} ../warning.md
```
::::{grid} 2 2 2 3
:gutter: 3
:::{grid-item-card} {fas}`book-open;pst-color-primary` Introduction
:link: ./introduction.html
Overview of agents and teams in AgentChat
:::
:::{grid-item-card} {fas}`users;pst-color-primary` Agents
:link: ./agents.html
Building agents that use LLMs, tools, and code execution.
:::
:::{grid-item-card} {fas}`users;pst-color-primary` Teams
:link: ./teams.html
Coordinating multiple agents in teams.
:::
:::{grid-item-card} {fas}`flag-checkered;pst-color-primary` Chat Termination
:link: ./termination.html
Determining when to end a task.
:::
::::
```{toctree}
:maxdepth: 1
:hidden:
introduction
agents
teams
termination
selector-group-chat
```

View File

@ -0,0 +1,120 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction\n",
"\n",
"\n",
"AgentChat provides intuitive defaults, such as **Agents** with preset behaviors and **Teams** with predefined communication protocols, to simplify building multi-agent applications.\n",
"\n",
":::{note}\n",
"For installation instructions, please refer to the [installation guide](../installation.md).\n",
":::\n",
"\n",
"\n",
"## Defining a Model Client \n",
"\n",
"In many cases, an agent will require access to a generative model. AgentChat utilizes the model clients provided by the [ `autogen-core`](../../core-user-guide/framework/model-clients.ipynb) package to access models."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
]
}
],
"source": [
"from autogen_core.components.models import OpenAIChatCompletionClient, UserMessage\n",
"\n",
"# Create an OpenAI model client.\n",
"model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
")\n",
"model_client_result = await model_client.create(\n",
" messages=[\n",
" UserMessage(content=\"What is the capital of France?\", source=\"user\"),\n",
" ]\n",
")\n",
"print(model_client_result) # \"Paris\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Handling Logs\n",
"\n",
"As agents interact with each other, they generate logs that can be useful in building and debugging multi-agent systems. Your application can consume these logs by attaching a log handler to the AgentChat events. AgentChat also provides a default log handler that writes logs to the console and file.\n",
"\n",
"Attache the log handler before running your application to view agent message logs. \n",
"\n",
"```{tip}\n",
"You can implement your own log handler and attach it to the AgentChat events.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from autogen_agentchat import EVENT_LOGGER_NAME\n",
"from autogen_agentchat.logging import ConsoleLogHandler\n",
"\n",
"logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
"logger.addHandler(ConsoleLogHandler())\n",
"logger.setLevel(logging.INFO)"
]
},
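{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of a custom handler, the cell below uses only the standard `logging` library to append AgentChat events to a local file. The class and file name are illustrative, not part of the AgentChat API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class MyFileLogHandler(logging.Handler):\n",
"    \"\"\"Append formatted AgentChat events to a local file (illustrative sketch).\"\"\"\n",
"\n",
"    def __init__(self, filename: str) -> None:\n",
"        super().__init__()\n",
"        self.filename = filename\n",
"\n",
"    def emit(self, record: logging.LogRecord) -> None:\n",
"        with open(self.filename, \"a\") as f:\n",
"            f.write(self.format(record) + \"\\n\")\n",
"\n",
"\n",
"logger.addHandler(MyFileLogHandler(\"agentchat_events.log\"))"
]
},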
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Whats Next ?\n",
"\n",
"Now that we have installed the `autogen-agentchat` package, let's move on to exploring how to build agents using the agent presets provided by the package."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,216 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Teams\n",
"\n",
"Teams define how groups of agents communicate to address tasks. AgentChat provides several preset team configurations to simplify building multi-agent applications.\n",
"\n",
"```{include} ../warning.md\n",
"\n",
"```\n",
"\n",
"A team may consist of a single agent or multiple agents. An important configuration for each team involves defining the order in which agents send messages and determining when the team should terminate.\n",
"\n",
"In the following section, we will begin by defining agents."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from autogen_agentchat import EVENT_LOGGER_NAME\n",
"from autogen_agentchat.agents import CodingAssistantAgent, ToolUseAssistantAgent\n",
"from autogen_agentchat.logging import ConsoleLogHandler\n",
"from autogen_agentchat.teams import MaxMessageTermination, RoundRobinGroupChat, SelectorGroupChat\n",
"from autogen_core.components.models import OpenAIChatCompletionClient\n",
"from autogen_core.components.tools import FunctionTool\n",
"\n",
"# Set up a log handler to print logs to the console.\n",
"logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
"logger.addHandler(ConsoleLogHandler())\n",
"logger.setLevel(logging.INFO)\n",
"\n",
"\n",
"# Create an OpenAI model client.\n",
"model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
")\n",
"\n",
"writing_assistant_agent = CodingAssistantAgent(\n",
" name=\"writing_assistant_agent\",\n",
" system_message=\"You are a helpful assistant that solve tasks by generating text responses and code.\",\n",
" model_client=model_client,\n",
")\n",
"\n",
"\n",
"async def get_weather(city: str) -> str:\n",
" return f\"The weather in {city} is 72 degrees and Sunny.\"\n",
"\n",
"\n",
"get_weather_tool = FunctionTool(get_weather, description=\"Get the weather for a city\")\n",
"\n",
"tool_use_agent = ToolUseAssistantAgent(\n",
" \"tool_use_agent\",\n",
" system_message=\"You are a helpful assistant that solves tasks by only using your tools.\",\n",
" model_client=model_client,\n",
" registered_tools=[get_weather_tool],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### RoundRobinGroupChat\n",
"\n",
"A team where agents take turns sending messages (in a round robin fashion) until a termination condition is met. "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:04.692283]:\u001b[0m\n",
"\n",
"Write a Haiku about the weather in Paris\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:05.961670], tool_use_agent:\u001b[0m\n",
"\n",
"Golden sun above, \n",
"Paris basks in warmth and light, \n",
"Seine flows in sunshine.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:05.962309], Termination:\u001b[0m\n",
"\n",
"Maximal number of messages 1 reached, current message count: 1"
]
}
],
"source": [
"round_robin_team = RoundRobinGroupChat([tool_use_agent, writing_assistant_agent])\n",
"round_robin_team_result = await round_robin_team.run(\n",
" \"Write a Haiku about the weather in Paris\", termination_condition=MaxMessageTermination(max_messages=1)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, we can define a team where the agents solve a problem by _writing and executing code_ in a round-robin fashion. \n",
"\n",
"```python \n",
"async with DockerCommandLineCodeExecutor(work_dir=\"coding\") as code_executor:\n",
" code_executor_agent = CodeExecutorAgent(\n",
" \"code_executor\", code_executor=code_executor)\n",
" code_execution_team = RoundRobinGroupChat([writing_assistant_agent, code_executor_agent])\n",
" code_execution_team_result = await code_execution_team.run(\"Create a plot of NVDIA and TSLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png\", termination_condition=MaxMessageTermination(max_messages=12))\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SelctorGroupChat\n",
"\n",
"A team where a generative model (LLM) is used to select the next agent to send a message based on the current conversation history.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:05.967894]:\u001b[0m\n",
"\n",
"What is the weather in paris right now? Also write a haiku about it.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:07.214716], tool_use_agent:\u001b[0m\n",
"\n",
"The weather in Paris is currently 72 degrees and Sunny.\n",
"\n",
"Here's a Haiku about it:\n",
"\n",
"Golden sun above, \n",
"Paris basks in warmth and light, \n",
"Seine flows in sunshine.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:08.320789], writing_assistant_agent:\u001b[0m\n",
"\n",
"I can't check the real-time weather, but you can use a weather website or app to find the current weather in Paris. If you need a fresh haiku, here's one for sunny weather:\n",
"\n",
"Paris bathed in sun, \n",
"Gentle warmth embraces all, \n",
"Seine sparkles with light.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-20T09:01:08.321296], Termination:\u001b[0m\n",
"\n",
"Maximal number of messages 2 reached, current message count: 2"
]
}
],
"source": [
"llm_team = SelectorGroupChat([tool_use_agent, writing_assistant_agent], model_client=model_client)\n",
"\n",
"llm_team_result = await llm_team.run(\n",
" \"What is the weather in paris right now? Also write a haiku about it.\",\n",
" termination_condition=MaxMessageTermination(max_messages=2),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What's Next?\n",
"\n",
"In this section, we reviewed how to define model clients, agents, and teams in AgentChat. Here are some other concepts to explore further:\n",
"\n",
"- Termination Conditions: Define conditions that determine when a team should stop running. In this sample, we used a `MaxMessageTermination` condition to stop the team after a certain number of messages. Explore other termination conditions supported in the AgentChat package."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,206 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Termination \n",
"\n",
"\n",
"In the previous section, we explored how to define agents, and organize them into teams that can solve tasks by communicating (a conversation). However, conversations can go on forever, and in many cases, we need to know _when_ to stop them. This is the role of the termination condition.\n",
"\n",
"AgentChat supports several termination condition by providing a base `TerminationCondition` class and several implementations that inherit from it.\n",
"\n",
"A termination condition is a callable that takes a sequence of ChatMessage objects since the last time the condition was called, and returns a StopMessage if the conversation should be terminated, or None otherwise. Once a termination condition has been reached, it must be reset before it can be used again.\n",
"\n",
"Some important things to note about termination conditions: \n",
"- They are stateful, and must be reset before they can be used again. \n",
"- They can be combined using the AND and OR operators. \n",
"- They are implemented/enforced by the team, and not by the agents. An agent may signal or request termination e.g., by sending a StopMessage, but the team is responsible for enforcing it.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To begin, let us define a simple team with only one agent and then explore how multiple termination conditions can be applied to guide the resulting behavior."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from autogen_agentchat import EVENT_LOGGER_NAME\n",
"from autogen_agentchat.agents import CodingAssistantAgent\n",
"from autogen_agentchat.logging import ConsoleLogHandler\n",
"from autogen_agentchat.teams import MaxMessageTermination, RoundRobinGroupChat, StopMessageTermination\n",
"from autogen_core.components.models import OpenAIChatCompletionClient\n",
"\n",
"logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
"logger.addHandler(ConsoleLogHandler())\n",
"logger.setLevel(logging.INFO)\n",
"\n",
"\n",
"model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" temperature=1,\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
")\n",
"\n",
"writing_assistant_agent = CodingAssistantAgent(\n",
" name=\"writing_assistant_agent\",\n",
" system_message=\"You are a helpful assistant that solve tasks by generating text responses and code.\",\n",
" model_client=model_client,\n",
")\n",
"\n",
"round_robin_team = RoundRobinGroupChat([writing_assistant_agent])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## MaxMessageTermination \n",
"\n",
"The simplest termination condition is the `MaxMessageTermination` condition, which terminates the conversation after a fixed number of messages. \n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:28.807176]:\u001b[0m\n",
"\n",
"Write a unique, Haiku about the weather in Paris\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:29.604935], writing_assistant_agent:\u001b[0m\n",
"\n",
"Gentle rain whispers, \n",
"Eiffel veiled in mists embrace, \n",
"Springs soft sigh in France.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:30.168531], writing_assistant_agent:\u001b[0m\n",
"\n",
"Gentle rain whispers, \n",
"Eiffel veiled in mists embrace, \n",
"Springs soft sigh in France.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:31.213291], writing_assistant_agent:\u001b[0m\n",
"\n",
"Gentle rain whispers, \n",
"Eiffel veiled in mists embrace, \n",
"Springs soft sigh in France.\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:31.213655], Termination:\u001b[0m\n",
"\n",
"Maximal number of messages 3 reached, current message count: 3"
]
}
],
"source": [
"round_robin_team = RoundRobinGroupChat([writing_assistant_agent])\n",
"round_robin_team_result = await round_robin_team.run(\n",
" \"Write a unique, Haiku about the weather in Paris\", termination_condition=MaxMessageTermination(max_messages=3)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that the conversation is terminated after the specified number of messages have been sent by the agent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## StopMessageTermination\n",
"\n",
"In this scenario, the team terminates the conversation if any agent sends a `StopMessage`. So, when does an agent send a `StopMessage`? Typically, this is implemented in the `on_message` method of the agent, where the agent can check the incoming message and decide to send a `StopMessage` based on some condition. \n",
"\n",
"A common pattern here is prompt the agent (or some agent participating in the conversation) to emit a specific text string in it's response, which can be used to trigger the termination condition. \n",
"\n",
"In fact, if you review the code implementation for the default `CodingAssistantAgent` class provided by AgentChat, you will observe two things\n",
"- The default `system_message` instructs the agent to end their response with the word \"terminate\" if they deem the task to be completed\n",
"- in the `on_message` method, the agent checks if the incoming message contains the text \"terminate\" and returns a `StopMessage` if it does. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:31.218855]:\u001b[0m\n",
"\n",
"Write a unique, Haiku about the weather in Paris\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:31.752676], writing_assistant_agent:\u001b[0m\n",
"\n",
"Mist hugs the Eiffel, \n",
"Soft rain kisses cobblestones, \n",
"Autumn whispers past. \n",
"\n",
"TERMINATE\n",
"--------------------------------------------------------------------------- \n",
"\u001b[91m[2024-10-19T12:19:31.753265], Termination:\u001b[0m\n",
"\n",
"Stop message received"
]
}
],
"source": [
"writing_assistant_agent = CodingAssistantAgent(\n",
" name=\"writing_assistant_agent\",\n",
" system_message=\"You are a helpful assistant that solve tasks by generating text responses and code. Respond with TERMINATE when the task is done.\",\n",
" model_client=model_client,\n",
")\n",
"\n",
"\n",
"round_robin_team = RoundRobinGroupChat([writing_assistant_agent])\n",
"\n",
"round_robin_team_result = await round_robin_team.run(\n",
" \"Write a unique, Haiku about the weather in Paris\", termination_condition=StopMessageTermination()\n",
")"
]
},
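{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Combining Termination Conditions\n",
"\n",
"As noted at the start of this notebook, termination conditions can be combined with AND and OR. The sketch below assumes the base `TerminationCondition` class overloads the `|` (OR) and `&` (AND) operators for composition; check the `autogen_agentchat.teams` API reference for the exact syntax in your version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Stop when the agent emits a StopMessage OR after 5 messages, whichever comes first\n",
"# (operator overloads assumed; see the API reference for your version).\n",
"combined_termination = MaxMessageTermination(max_messages=5) | StopMessageTermination()\n",
"\n",
"round_robin_team = RoundRobinGroupChat([writing_assistant_agent])\n",
"round_robin_team_result = await round_robin_team.run(\n",
"    \"Write a unique, Haiku about the weather in Paris\", termination_condition=combined_termination\n",
")"
]
}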
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,5 @@
```{warning}
AgentChat is Work in Progress. APIs may change in future releases.
```