mirror of https://github.com/microsoft/autogen.git

Update agentchat tutorial, refactor content (#4118)

Resolves "Tutorial Chapter for Custom ChatAgent" #4114:
- Updated the tutorial chapter on agents and custom agents.
- Updated the README example to use a tool call.
- Added a "Models" section to the AgentChat tutorial.
- Added a placeholder for "Tutorial Chapter for Swarm" #4113.

Parent: 0e985d4b40
Commit: 12becdddb1

README.md (43 lines changed)
@@ -101,31 +101,44 @@ We look forward to your contributions!
First install the packages:

```bash
pip install 'autogen-agentchat==0.4.0.dev4' 'autogen-ext[docker]==0.4.0.dev4'
pip install 'autogen-agentchat==0.4.0.dev4' 'autogen-ext[openai]==0.4.0.dev4'
```

The following code uses code execution, you need to have [Docker installed](https://docs.docker.com/engine/install/)
and running on your machine.
The following code uses OpenAI's GPT-4o model and you need to provide your
API key to run.
To use Azure OpenAI models, follow the instruction
[here](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/cookbook/azure-openai-with-aad-auth.html).

```python
import asyncio
from autogen_ext.code_executors import DockerCommandLineCodeExecutor
from autogen_ext.models import OpenAIChatCompletionClient
from autogen_agentchat.agents import CodeExecutorAgent, CodingAssistantAgent
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import Console, TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.task import TextMentionTermination, Console
from autogen_ext.models import OpenAIChatCompletionClient

# Define a tool
async def get_weather(city: str) -> str:
    return f"The weather in {city} is 73 degrees and Sunny."

async def main() -> None:
    async with DockerCommandLineCodeExecutor(work_dir="coding") as code_executor:
        code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)
        coding_assistant_agent = CodingAssistantAgent(
            "coding_assistant", model_client=OpenAIChatCompletionClient(model="gpt-4o", api_key="YOUR_API_KEY")
    # Define an agent
    weather_agent = AssistantAgent(
        name="weather_agent",
        model_client=OpenAIChatCompletionClient(
            model="gpt-4o-2024-08-06",
            # api_key="YOUR_API_KEY",
        ),
        tools=[get_weather],
    )

    # Define termination condition
    termination = TextMentionTermination("TERMINATE")
    group_chat = RoundRobinGroupChat([coding_assistant_agent, code_executor_agent], termination_condition=termination)
    stream = group_chat.run_stream(
        task="Create a plot of NVIDIA and TSLA stock returns YTD from 2024-01-01 and save it to 'nvidia_tesla_2024_ytd.png'."
    )

    # Define a team
    agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)

    # Run the team and stream messages to the console
    stream = agent_team.run_stream(task="What is the weather in New York?")
    await Console(stream)

asyncio.run(main())
@@ -2,14 +2,20 @@
myst:
  html_meta:
    "description lang=en": |
      User Guide for AgentChat, a high-level api for AutoGen
      User Guide for AgentChat, a high-level API for AutoGen
---

# AgentChat

AgentChat is a high-level package for building multi-agent applications built on top of the [`autogen-core`](../core-user-guide/index.md) package. For beginner users, AgentChat is the recommended starting point. For advanced users, [`autogen-core`](../core-user-guide/index.md) provides more flexibility and control over the underlying components.
AgentChat is a high-level API for building multi-agent applications.
It is built on top of the [`autogen-core`](../core-user-guide/index.md) package.
For beginner users, AgentChat is the recommended starting point.
For advanced users, [`autogen-core`](../core-user-guide/index.md)'s event-driven
programming model provides more flexibility and control over the underlying components.

AgentChat aims to provide intuitive defaults, such as **Agents** with preset behaviors and **Teams** with predefined communication protocols, to simplify building multi-agent applications.
AgentChat aims to provide intuitive defaults, such as **Agents** with preset
behaviors and **Teams** with predefined [multi-agent design patterns](../core-user-guide/design-patterns/index.md),
to simplify building multi-agent applications.

```{include} warning.md
@@ -64,6 +64,15 @@ Install the `autogen-agentchat` package using pip:
pip install 'autogen-agentchat==0.4.0.dev4'
```

## Install OpenAI for Model Client

To use the OpenAI and Azure OpenAI models, you need to install the following
extensions:

```bash
pip install 'autogen-ext[openai]==0.4.0.dev4'
```

## Install Docker for Code Execution

We recommend using Docker for code execution.
@@ -19,11 +19,33 @@
    "For installation instructions, please refer to the [installation guide](./installation).\n",
    ":::\n",
    "\n",
    "In AutoGen AgentChat, you can build applications quickly using preset agents.\n",
    "To illustrate this, we will begin by creating a team with a single agent\n",
    "that can use tools and respond to messages.\n",
    "\n",
    "\n",
    "An agent is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. \n",
    "\n",
    "In AgentChat, agents can be rapidly implemented using preset agent configurations. To illustrate this, we will begin with creating an agent that can address tasks by responding to messages it receives. "
    "The following code uses the OpenAI model. If you haven't already, you need to\n",
    "install the following package and extension:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "shellscript"
    }
   },
   "outputs": [],
   "source": [
    "pip install 'autogen-agentchat==0.4.0.dev4' 'autogen-ext[openai]==0.4.0.dev4'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To use Azure OpenAI models and AAD authentication,\n",
    "you can follow the instructions [here](./tutorial/models.ipynb#azure-openai)."
   ]
  },
  {
@@ -73,7 +95,10 @@
    "    # Define an agent\n",
    "    weather_agent = AssistantAgent(\n",
    "        name=\"weather_agent\",\n",
    "        model_client=OpenAIChatCompletionClient(model=\"gpt-4o-2024-08-06\"),\n",
    "        model_client=OpenAIChatCompletionClient(\n",
    "            model=\"gpt-4o-2024-08-06\",\n",
    "            # api_key=\"YOUR_API_KEY\",\n",
    "        ),\n",
    "        tools=[get_weather],\n",
    "    )\n",
    "\n",
@@ -83,7 +108,7 @@
    "    # Define a team\n",
    "    agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)\n",
    "\n",
    "    # Run the team and stream messages\n",
    "    # Run the team and stream messages to the console\n",
    "    stream = agent_team.run_stream(task=\"What is the weather in New York?\")\n",
    "    await Console(stream)\n",
    "\n",
@@ -96,7 +121,8 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The code snippet above introduces two high-level concepts in AgentChat: `Agent` and `Team`. An Agent helps us define what actions are taken when a message is received. Specifically, we use the `AssistantAgent` preset - an agent that can be given tools (functions) that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the `RoundRobinGroupChat` team, agents receive messages in a sequential round-robin fashion. "
    "The code snippet above introduces two high-level concepts in AgentChat: *Agent* and *Team*. An Agent helps us define what actions are taken when a message is received. Specifically, we use the {py:class}`~autogen_agentchat.agents.AssistantAgent` preset - an agent that can be given access to a model (e.g., LLM) and tools (functions) that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team, agents respond in a sequential round-robin fashion.\n",
    "In this case, we have a single agent, so the same agent is used for each round."
   ]
  },
  {
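Round-robin ordering is easiest to see with more than one agent. The sketch below is not part of this commit: it pairs two `AssistantAgent`s in a `RoundRobinGroupChat` so they respond in alternating turns, reusing only APIs that appear elsewhere in this diff; the "writer"/"reviewer" names and system messages are illustrative assumptions.

```python
# Illustrative sketch, not part of this commit: two agents alternating turns in a
# RoundRobinGroupChat. Uses the same 0.4.0.dev4 APIs as the README example above;
# the writer/reviewer roles are hypothetical.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import Console, TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")
    writer = AssistantAgent(
        name="writer",
        model_client=model_client,
        system_message="Draft a short answer to the task.",
    )
    reviewer = AssistantAgent(
        name="reviewer",
        model_client=model_client,
        system_message="Review the draft. Reply with TERMINATE when it is good.",
    )
    # Agents respond in list order: writer, reviewer, writer, reviewer, ...
    team = RoundRobinGroupChat(
        [writer, reviewer],
        termination_condition=TextMentionTermination("TERMINATE"),
    )
    await Console(team.run_stream(task="Explain round-robin scheduling in one sentence."))


asyncio.run(main())
```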
@@ -4,66 +4,160 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "# Agents\n",
    "\n",
    "An agent is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. \n",
    "\n",
    "```{include} ../warning.md\n",
    "\n",
    "```\n",
    "\n",
    "AgentChat provides a set of preset Agents, each with variations in how an agent might respond to received messages. \n",
    "AutoGen AgentChat provides a set of preset Agents, each with variations in how an agent might respond to messages.\n",
    "All agents share the following attributes and methods:\n",
    "\n",
    "Each agent inherits from a {py:class}`~autogen_agentchat.base.BaseChatAgent` class with a few generic properties:\n",
    "- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.name`: The unique name of the agent.\n",
    "- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.description`: The description of the agent in text.\n",
    "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: Send the agent a sequence of {py:class}`~autogen_agentchat.messages.ChatMessage` and get a {py:class}`~autogen_agentchat.base.Response`.\n",
    "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`: Same as {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` but returns an iterator of {py:class}`~autogen_agentchat.messages.AgentMessage` followed by a {py:class}`~autogen_agentchat.base.Response` as the last item.\n",
    "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.reset`: Reset the agent to its initial state.\n",
    "\n",
    "- `name`: The name of the agent. This is used by the team to uniquely identify the agent. It should be unique within the team.\n",
    "- `description`: The description of the agent. This is used by the team to make decisions about which agents to use. The description should detail the agent's capabilities and how to interact with it.\n",
    " \n",
    "```{tip}\n",
    "How do agents send and receive messages? \n",
    "\n",
    "AgentChat is built on the `autogen-core` package, which handles the details of sending and receiving messages. `autogen-core` provides a runtime environment, which facilitates communication between agents (message sending and delivery), manages their identities and lifecycles, and enforces security and privacy boundaries. \n",
    "AgentChat handles the details of instantiating a runtime and registering agents with the runtime.\n",
    "\n",
    "To learn more about the runtime in `autogen-core`, see the [autogen-core documentation on agents and runtime](../../core-user-guide/framework/agent-and-agent-runtime.ipynb).\n",
    "```\n",
    "\n",
    "Each agent also implements an {py:meth}`~autogen_agentchat.base.BaseChatAgent.on_messages` method that defines the behavior of the agent in response to a message.\n",
    "See {py:mod}`autogen_agentchat.messages` for more information on AgentChat message types.\n",
    "\n",
    "\n",
    "To begin, let us import the required classes and set up a model client that will be used by agents.\n"
    "## Assistant Agent\n",
    "\n",
    "{py:class}`~autogen_agentchat.agents.AssistantAgent` is a built-in agent that\n",
    "uses a language model with the ability to use tools."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "\n",
    "from autogen_agentchat import EVENT_LOGGER_NAME\n",
    "from autogen_agentchat.agents import ToolUseAssistantAgent\n",
    "from autogen_agentchat.logging import ConsoleLogHandler\n",
    "from autogen_agentchat.agents import AssistantAgent\n",
    "from autogen_agentchat.messages import TextMessage\n",
    "from autogen_core.base import CancellationToken\n",
    "from autogen_core.components.models import OpenAIChatCompletionClient\n",
    "from autogen_core.components.tools import FunctionTool\n",
    "\n",
    "logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
    "logger.addHandler(ConsoleLogHandler())\n",
    "logger.setLevel(logging.INFO)\n",
    "from autogen_ext.models import OpenAIChatCompletionClient\n",
    "\n",
    "\n",
    "# Create an OpenAI model client.\n",
    "# Define a tool that searches the web for information.\n",
    "async def web_search(query: str) -> str:\n",
    "    \"\"\"Find information on the web\"\"\"\n",
    "    return \"AutoGen is a programming framework for building multi-agent applications.\"\n",
    "\n",
    "\n",
    "# Create an agent that uses the OpenAI GPT-4o model.\n",
    "model_client = OpenAIChatCompletionClient(\n",
    "    model=\"gpt-4o-2024-08-06\",\n",
    "    # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
    "    model=\"gpt-4o\",\n",
    "    # api_key=\"YOUR_API_KEY\",\n",
    ")\n",
    "agent = AssistantAgent(\n",
    "    name=\"assistant\",\n",
    "    model_client=model_client,\n",
    "    tools=[web_search],\n",
    "    system_message=\"Use tools to solve tasks.\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can call the {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages` \n",
    "method to get the agent to respond to a message."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_hqVC7UJUPhKaiJwgVKkg66ak', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_hqVC7UJUPhKaiJwgVKkg66ak')])]\n",
      "source='assistant' models_usage=RequestUsage(prompt_tokens=92, completion_tokens=14) content='AutoGen is a programming framework designed for building multi-agent applications.'\n"
     ]
    }
   ],
   "source": [
    "async def assistant_run() -> None:\n",
    "    response = await agent.on_messages(\n",
    "        [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n",
    "        cancellation_token=CancellationToken(),\n",
    "    )\n",
    "    print(response.inner_messages)\n",
    "    print(response.chat_message)\n",
    "\n",
    "\n",
    "# Use asyncio.run(assistant_run()) when running in a script.\n",
    "await assistant_run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The call to the {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages` method\n",
    "returns a {py:class}`~autogen_agentchat.base.Response`\n",
    "that contains the agent's final response in the {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute,\n",
    "as well as a list of inner messages in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` attribute,\n",
    "which stores the agent's \"thought process\" that led to the final response.\n",
    "\n",
    "### Stream Messages\n",
    "\n",
    "We can also stream each message as it is generated by the agent by using the\n",
    "{py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "source='assistant' models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15) content=[FunctionCall(id='call_fXhM4PeZsodhhUOlNiFkoBXF', arguments='{\"query\":\"AutoGen\"}', name='web_search')]\n",
      "source='assistant' models_usage=None content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_fXhM4PeZsodhhUOlNiFkoBXF')]\n",
      "Response(chat_message=TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=92, completion_tokens=31), content='AutoGen is a programming framework designed for building multi-agent applications. If you need more specific information about its features or usage, feel free to ask!'), inner_messages=[ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_fXhM4PeZsodhhUOlNiFkoBXF', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_fXhM4PeZsodhhUOlNiFkoBXF')])])\n"
     ]
    }
   ],
   "source": [
    "async def assistant_run_stream() -> None:\n",
    "    async for message in agent.on_messages_stream(\n",
    "        [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n",
    "        cancellation_token=CancellationToken(),\n",
    "    ):\n",
    "        print(message)\n",
    "\n",
    "\n",
    "# Use asyncio.run(assistant_run_stream()) when running in a script.\n",
    "await assistant_run_stream()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method\n",
    "returns an asynchronous generator that yields each inner message generated by the agent,\n",
    "and the last item is the final response message in the {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute.\n",
    "\n",
    "From the messages, you can see the assistant agent used the `web_search` tool to\n",
    "search for information and responded using the search results.\n",
    "\n",
    "### Understanding Tool Calling\n",
    "\n",
    "Large Language Models (LLMs) are typically limited to generating text or code responses. However, many complex tasks benefit from the ability to use external tools that perform specific actions, such as fetching data from APIs or databases.\n",
    "\n",
    "To address this limitation, modern LLMs can now accept a list of available tool schemas (descriptions of tools and their arguments) and generate a tool call message. This capability is known as **Tool Calling** or **Function Calling** and is becoming a popular pattern in building intelligent agent-based applications.\n",
    "\n",
    "For more information on tool calling, refer to the documentation from [OpenAI](https://platform.openai.com/docs/guides/function-calling) and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use)."
   ]
  },
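To make the tool-schema idea concrete, here is a small sketch that is not part of this commit: it wraps the `web_search` function from the cell above in `FunctionTool` (imported from `autogen_core.components.tools`, the same import the removed code in this diff uses) and prints the schema the model would see. The exact shape of the `schema` attribute is an assumption based on autogen-core 0.4.0.dev4.

```python
# Sketch only, not part of this commit: inspect the schema that tool calling sends
# to the model. FunctionTool and its import path appear in the removed code in this
# diff; the `schema` attribute is assumed from autogen-core 0.4.0.dev4.
from autogen_core.components.tools import FunctionTool


async def web_search(query: str) -> str:
    """Find information on the web"""
    return "AutoGen is a programming framework for building multi-agent applications."


web_search_tool = FunctionTool(web_search, description="Find information on the web")
# The model receives a description of the tool and its typed arguments, and may
# answer with a tool call message naming `web_search` and a value for `query`.
print(web_search_tool.schema)
```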
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -97,140 +191,20 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## CodeExecutorAgent\n",
    "\n",
    "\n",
    "## ToolUseAssistantAgent\n",
    "\n",
    "This agent responds to messages by making appropriate tool or function calls.\n",
    "\n",
    "```{tip}\n",
    "Understanding Tool Calling\n",
    "\n",
    "Large Language Models (LLMs) are typically limited to generating text or code responses. However, many complex tasks benefit from the ability to use external tools that perform specific actions, such as fetching data from APIs or databases.\n",
    "\n",
    "To address this limitation, modern LLMs can now accept a list of available tool schemas (descriptions of tools and their arguments) and generate a tool call message. This capability is known as **Tool Calling** or **Function Calling** and is becoming a popular pattern in building intelligent agent-based applications.\n",
    "\n",
    "For more information on tool calling, refer to the documentation from [OpenAI](https://platform.openai.com/docs/guides/function-calling) and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use).\n",
    "```\n",
    "\n",
    "To set up a ToolUseAssistantAgent in AgentChat, follow these steps:\n",
    "\n",
    "1. Define the tool, typically as a Python function.\n",
    "2. Wrap the function in the `FunctionTool` class from the `autogen-core` package. This ensures the function schema can be correctly parsed and used for tool calling.\n",
    "3. Attach the tool to the agent.\n",
    " "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def get_weather(city: str) -> str:\n",
    "    return f\"The weather in {city} is 72 degrees and Sunny.\"\n",
    "\n",
    "\n",
    "get_weather_tool = FunctionTool(get_weather, description=\"Get the weather for a city\")\n",
    "\n",
    "tool_use_agent = ToolUseAssistantAgent(\n",
    "    \"tool_use_agent\",\n",
    "    system_message=\"You are a helpful assistant that solves tasks by only using your tools.\",\n",
    "    model_client=model_client,\n",
    "    registered_tools=[get_weather_tool],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "source='tool_use_agent' content=\"Could you please specify a city in France for which you'd like to get the current weather?\"\n"
     ]
    }
   ],
   "source": [
    "tool_result = await tool_use_agent.on_messages(\n",
    "    messages=[\n",
    "        TextMessage(content=\"What is the weather right now in France?\", source=\"user\"),\n",
    "    ],\n",
    "    cancellation_token=CancellationToken(),\n",
    ")\n",
    "print(tool_result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that the response generated by the ToolUseAssistantAgent is a tool call message which can then be executed to get the right result. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "## CodeExecutionAgent \n",
    "\n",
    "This agent preset extracts and executes code snippets found in received messages and returns the output. It is typically used within a team where a `CodingAssistantAgent` is also present - the `CodingAssistantAgent` can generate code snippets, which the `CodeExecutionAgent` receives and executes to make progress on a task. \n",
    "The {py:class}`~autogen_agentchat.agents.CodeExecutorAgent`\n",
    "preset extracts and executes code snippets found in received messages and returns the output. It is typically used within a team with another agent that generates code snippets to be executed.\n",
    "\n",
    "```{note}\n",
    "It is recommended that the `CodeExecutionAgent` uses a Docker container to execute code snippets. This ensures that the code snippets are executed in a safe and isolated environment. To use Docker, your environment must have Docker installed and running. \n",
    "If you do not have Docker installed, you can install it from the [Docker website](https://docs.docker.com/get-docker/) or alternatively skip the next cell.\n",
    "It is recommended that the {py:class}`~autogen_agentchat.agents.CodeExecutorAgent` agent\n",
    "uses a Docker container to execute code. This ensures that model-generated code is executed in an isolated environment. To use Docker, your environment must have Docker installed and running. \n",
    "Follow the installation instructions for [Docker](https://docs.docker.com/get-docker/).\n",
    "```\n",
    "\n",
    "In the code snippet below, we show how to set up a `CodeExecutionAgent` that uses the `DockerCommandLineCodeExecutor` class to execute code snippets in a Docker container. The `work_dir` parameter indicates where all executed files are first saved locally before being executed in the Docker container.\n",
    "\n",
    "\n",
    " "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "source='code_executor' content='No code blocks found in the thread.'\n"
     ]
    }
   ],
   "source": [
    "from autogen_agentchat.agents import CodeExecutorAgent\n",
    "from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
    "\n",
    "async with DockerCommandLineCodeExecutor(work_dir=\"coding\") as code_executor:  # type: ignore[syntax]\n",
    "    code_executor_agent = CodeExecutorAgent(\"code_executor\", code_executor=code_executor)\n",
    "    code_execution_result = await code_executor_agent.on_messages(\n",
    "        messages=[\n",
    "            TextMessage(content=\"Here is some code \\n ```python print('Hello world') \\n``` \", source=\"user\"),\n",
    "        ],\n",
    "        cancellation_token=CancellationToken(),\n",
    "    )\n",
    "    print(code_execution_result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building Custom Agents\n",
    "\n",
    "In many cases, you may have agents with custom behaviors that do not fall into any of the preset agent categories. In such cases, you can build custom agents by subclassing the {py:class}`~autogen_agentchat.agents.BaseChatAgent` class and implementing the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method.\n",
    "\n",
    "A common example is an agent that can be part of a team but primarily is driven by human input. Other examples include agents that respond with specific text, tool or function calls. \n",
    "\n",
    "In the example below we show how to implement a `UserProxyAgent` - an agent that asks the user to enter some text and then returns that message as a response. "
    "In this example, we show how to set up a {py:class}`~autogen_agentchat.agents.CodeExecutorAgent` agent that uses the \n",
    "{py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor` \n",
    "to execute code snippets in a Docker container. The `work_dir` parameter indicates where all executed files are first saved locally before being executed in the Docker container."
   ]
  },
  {
@@ -242,22 +216,182 @@
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "source='user_proxy_agent' content='Hello there'\n"
      "source='code_executor' models_usage=None content='Hello world\\n'\n"
     ]
    }
   ],
   "source": [
    "from autogen_agentchat.agents import CodeExecutorAgent\n",
    "from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
    "\n",
    "\n",
    "async def run_code_executor_agent() -> None:\n",
    "    # Create a code executor agent that uses a Docker container to execute code.\n",
    "    code_executor = DockerCommandLineCodeExecutor(work_dir=\"coding\")\n",
    "    await code_executor.start()\n",
    "    code_executor_agent = CodeExecutorAgent(\"code_executor\", code_executor=code_executor)\n",
    "\n",
    "    # Run the agent with a given code snippet.\n",
    "    task = TextMessage(\n",
    "        content=\"\"\"Here is some code\n",
    "```python\n",
    "print('Hello world')\n",
    "```\n",
    "\"\"\",\n",
    "        source=\"user\",\n",
    "    )\n",
    "    response = await code_executor_agent.on_messages([task], CancellationToken())\n",
    "    print(response.chat_message)\n",
    "\n",
    "    # Stop the code executor.\n",
    "    await code_executor.stop()\n",
    "\n",
    "\n",
    "# Use asyncio.run(run_code_executor_agent()) when running in a script.\n",
    "await run_code_executor_agent()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This example shows the agent executing a code snippet that prints \"Hello world\".\n",
    "The agent then returns the output of the code execution."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build Your Own Agents\n",
    "\n",
    "You may have agents with behaviors that do not fall into a preset. \n",
    "In such cases, you can build custom agents.\n",
    "\n",
    "All agents in AgentChat inherit from {py:class}`~autogen_agentchat.agents.BaseChatAgent` \n",
    "class and implement the following abstract methods and attributes:\n",
    "\n",
    "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: The abstract method that defines the behavior of the agent in response to messages. This method is called when the agent is asked to provide a response in {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run`. It returns a {py:class}`~autogen_agentchat.base.Response` object.\n",
    "- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.reset`: The abstract method that resets the agent to its initial state. This method is called when the agent is asked to reset itself.\n",
    "- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible {py:class}`~autogen_agentchat.messages.ChatMessage` message types the agent can produce in its response.\n",
    "\n",
    "Optionally, you can implement the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent. If this method is not implemented, the agent\n",
    "uses the default implementation of {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`\n",
    "that calls the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method and\n",
    "yields all messages in the response."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### CountDownAgent\n",
    "\n",
    "In this example, we create a simple agent that counts down from a given number to zero,\n",
    "and produces a stream of messages with the current count."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3...\n",
      "2...\n",
      "1...\n",
      "Done!\n"
     ]
    }
   ],
   "source": [
    "from typing import AsyncGenerator, List, Sequence\n",
    "\n",
    "from autogen_agentchat.agents import BaseChatAgent\n",
    "from autogen_agentchat.base import Response\n",
    "from autogen_agentchat.messages import AgentMessage, ChatMessage\n",
    "\n",
    "\n",
    "class CountDownAgent(BaseChatAgent):\n",
    "    def __init__(self, name: str, count: int = 3):\n",
    "        super().__init__(name, \"A simple agent that counts down.\")\n",
    "        self._count = count\n",
    "\n",
    "    @property\n",
    "    def produced_message_types(self) -> List[type[ChatMessage]]:\n",
    "        return [TextMessage]\n",
    "\n",
    "    async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
    "        # Calls the on_messages_stream.\n",
    "        response: Response | None = None\n",
    "        async for message in self.on_messages_stream(messages, cancellation_token):\n",
    "            if isinstance(message, Response):\n",
    "                response = message\n",
    "        assert response is not None\n",
    "        return response\n",
    "\n",
    "    async def on_messages_stream(\n",
    "        self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken\n",
    "    ) -> AsyncGenerator[AgentMessage | Response, None]:\n",
    "        inner_messages: List[AgentMessage] = []\n",
    "        for i in range(self._count, 0, -1):\n",
    "            msg = TextMessage(content=f\"{i}...\", source=self.name)\n",
    "            inner_messages.append(msg)\n",
    "            yield msg\n",
    "        # The response is returned at the end of the stream.\n",
    "        # It contains the final message and all the inner messages.\n",
    "        yield Response(chat_message=TextMessage(content=\"Done!\", source=self.name), inner_messages=inner_messages)\n",
    "\n",
    "    async def reset(self, cancellation_token: CancellationToken) -> None:\n",
    "        pass\n",
    "\n",
    "\n",
    "async def run_countdown_agent() -> None:\n",
    "    # Create a countdown agent.\n",
    "    countdown_agent = CountDownAgent(\"countdown\")\n",
    "\n",
    "    # Run the agent with a given task and stream the response.\n",
    "    async for message in countdown_agent.on_messages_stream([], CancellationToken()):\n",
    "        if isinstance(message, Response):\n",
    "            print(message.chat_message.content)\n",
    "        else:\n",
    "            print(message.content)\n",
    "\n",
    "\n",
    "# Use asyncio.run(run_countdown_agent()) when running in a script.\n",
    "await run_countdown_agent()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### UserProxyAgent \n",
    "\n",
    "A common use case for building a custom agent is to create an agent that acts as a proxy for the user.\n",
    "\n",
    "In the example below we show how to implement a `UserProxyAgent` - an agent that asks the user to enter\n",
    "some text through console and then returns that message as a response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "I am glad to be here.\n"
     ]
    }
   ],
   "source": [
    "import asyncio\n",
    "from typing import List, Sequence\n",
    "\n",
    "from autogen_agentchat.agents import BaseChatAgent\n",
    "from autogen_agentchat.base import Response\n",
    "from autogen_agentchat.messages import (\n",
    "    ChatMessage,\n",
    "    StopMessage,\n",
    "    TextMessage,\n",
    ")\n",
    "from autogen_core.base import CancellationToken\n",
    "\n",
    "\n",
    "class UserProxyAgent(BaseChatAgent):\n",
@@ -266,42 +400,24 @@
    "\n",
    "    @property\n",
    "    def produced_message_types(self) -> List[type[ChatMessage]]:\n",
    "        return [TextMessage, StopMessage]\n",
    "        return [TextMessage]\n",
    "\n",
    "    async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
    "        user_input = await asyncio.get_event_loop().run_in_executor(None, input, \"Enter your response: \")\n",
    "        if \"TERMINATE\" in user_input:\n",
    "            return Response(chat_message=StopMessage(content=\"User has terminated the conversation.\", source=self.name))\n",
    "        return Response(chat_message=TextMessage(content=user_input, source=self.name))\n",
    "\n",
    "    async def reset(self, cancellation_token: CancellationToken) -> None:\n",
    "        pass\n",
    "\n",
    "\n",
    "user_proxy_agent = UserProxyAgent(name=\"user_proxy_agent\")\n",
    "async def run_user_proxy_agent() -> None:\n",
    "    user_proxy_agent = UserProxyAgent(name=\"user_proxy_agent\")\n",
    "    response = await user_proxy_agent.on_messages([], CancellationToken())\n",
    "    print(response.chat_message.content)\n",
    "\n",
    "user_proxy_agent_result = await user_proxy_agent.on_messages(\n",
    "    messages=[\n",
    "        TextMessage(content=\"What is the weather right now in France?\", source=\"user\"),\n",
    "    ],\n",
    "    cancellation_token=CancellationToken(),\n",
    ")\n",
    "print(user_proxy_agent_result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "So far, we have learned a few key concepts:\n",
    "\n",
    "- How to define agents \n",
    "- How to send messages to agents by calling the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method on the {py:class}`~autogen_agentchat.agents.BaseChatAgent` class and viewing the agent's response \n",
    "- An overview of the different types of agents available in AgentChat\n",
    "- How to build custom agents\n",
    "\n",
    "However, the ability to address complex tasks is often best served by groups of agents that interact as a team. Let us review how to build these teams."
    "# Use asyncio.run(run_user_proxy_agent()) when running in a script.\n",
    "await run_user_proxy_agent()"
   ]
  }
 ],
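A custom agent like the `UserProxyAgent` above plugs into teams the same way the presets do. The sketch below is not part of this commit: it pairs the custom agent with an `AssistantAgent` in a `RoundRobinGroupChat`, stopping when the user types TERMINATE; it only combines APIs that appear elsewhere in this diff, and assumes the `UserProxyAgent` class defined above is in scope.

```python
# Sketch only, not part of this commit: using the custom UserProxyAgent defined
# above inside a team. Combines the README's team APIs with the custom agent;
# the task text is illustrative.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.task import Console, TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models import OpenAIChatCompletionClient


async def main() -> None:
    assistant = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o-2024-08-06"),
    )
    user_proxy = UserProxyAgent(name="user_proxy_agent")  # Defined in the cells above.
    # The assistant answers, then the user proxy prompts for console input;
    # typing TERMINATE ends the run via the termination condition.
    team = RoundRobinGroupChat(
        [assistant, user_proxy],
        termination_condition=TextMentionTermination("TERMINATE"),
    )
    await Console(team.run_stream(task="Help the user plan a day trip."))


asyncio.run(main())
```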
@@ -321,7 +437,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.6"
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
@@ -16,28 +16,34 @@ Tutorial to get started with AgentChat.
::::{grid} 2 2 2 3
:gutter: 3

:::{grid-item-card} {fas}`book-open;pst-color-primary` Introduction
:link: ./introduction.html
:::{grid-item-card} {fas}`book-open;pst-color-primary` Models
:link: ./models.html

Overview of agents and teams in AgentChat
Setting up model clients for agents and teams.
:::

:::{grid-item-card} {fas}`users;pst-color-primary` Agents
:link: ./agents.html

Building agents that use LLMs, tools, and execute code.
Building agents that use models, tools, and code executors.
:::

:::{grid-item-card} {fas}`users;pst-color-primary` Teams
:::{grid-item-card} {fas}`users;pst-color-primary` Teams Intro
:link: ./teams.html

Coordinating multiple agents in teams.
Introduction to teams and task termination.
:::

:::{grid-item-card} {fas}`flag-checkered;pst-color-primary` Chat Termination
:link: ./termination.html
:::{grid-item-card} {fas}`users;pst-color-primary` Selector Group Chat
:link: ./selector-group-chat.html

Determining when to end a task.
A smart team that uses a model-based strategy and custom selector.
:::

:::{grid-item-card} {fas}`users;pst-color-primary` Swarm
:link: ./swarm.html

A dynamic team that uses handoffs to pass tasks between agents.
:::

::::

@@ -46,9 +52,10 @@ Determining when to end a task.
:maxdepth: 1
:hidden:

introduction
models
agents
teams
termination
selector-group-chat
swarm
termination
```
@@ -1,120 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction\n",
    "\n",
    "\n",
    "AgentChat provides intuitive defaults, such as **Agents** with preset behaviors and **Teams** with predefined communication protocols, to simplify building multi-agent applications.\n",
    "\n",
    ":::{note}\n",
    "For installation instructions, please refer to the [installation guide](../installation.md).\n",
    ":::\n",
    "\n",
    "\n",
    "## Defining a Model Client \n",
    "\n",
    "In many cases, an agent will require access to a generative model. AgentChat utilizes the model clients provided by the [`autogen-core`](../../core-user-guide/framework/model-clients.ipynb) package to access models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
     ]
    }
   ],
   "source": [
    "from autogen_core.components.models import OpenAIChatCompletionClient, UserMessage\n",
    "\n",
    "# Create an OpenAI model client.\n",
    "model_client = OpenAIChatCompletionClient(\n",
    "    model=\"gpt-4o-2024-08-06\",\n",
    "    # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
    ")\n",
    "model_client_result = await model_client.create(\n",
    "    messages=[\n",
    "        UserMessage(content=\"What is the capital of France?\", source=\"user\"),\n",
    "    ]\n",
    ")\n",
    "print(model_client_result)  # \"Paris\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Handling Logs\n",
    "\n",
    "As agents interact with each other, they generate logs that can be useful in building and debugging multi-agent systems. Your application can consume these logs by attaching a log handler to the AgentChat events. AgentChat also provides a default log handler that writes logs to the console and file.\n",
    "\n",
    "Attach the log handler before running your application to view agent message logs. \n",
    "\n",
    "```{tip}\n",
    "You can implement your own log handler and attach it to the AgentChat events.\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "\n",
    "from autogen_agentchat import EVENT_LOGGER_NAME\n",
    "from autogen_agentchat.logging import ConsoleLogHandler\n",
    "\n",
    "logger = logging.getLogger(EVENT_LOGGER_NAME)\n",
    "logger.addHandler(ConsoleLogHandler())\n",
    "logger.setLevel(logging.INFO)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What's Next?\n",
    "\n",
    "Now that we have installed the `autogen-agentchat` package, let's move on to exploring how to build agents using the agent presets provided by the package."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
@@ -0,0 +1,181 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Models\n",
    "\n",
    "In many cases, agents need access to model services such as OpenAI, Azure OpenAI, and local models.\n",
    "AgentChat utilizes model clients provided by the\n",
    "[`autogen-ext`](../../core-user-guide/framework/model-clients.ipynb) package."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## OpenAI\n",
    "\n",
    "To access OpenAI models, you need to install the `openai` extension to use the {py:class}`~autogen_ext.models.OpenAIChatCompletionClient`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "shellscript"
    }
   },
   "outputs": [],
   "source": [
    "pip install 'autogen-ext[openai]==0.4.0.dev4'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You will also need to obtain an [API key](https://platform.openai.com/account/api-keys) from OpenAI."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "from autogen_ext.models import OpenAIChatCompletionClient\n",
    "\n",
    "openai_model_client = OpenAIChatCompletionClient(\n",
    "    model=\"gpt-4o-2024-08-06\",\n",
    "    # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY environment variable set.\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To test the model client, you can use the following code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
     ]
    }
   ],
   "source": [
    "from autogen_core.components.models import UserMessage\n",
    "\n",
    "result = await openai_model_client.create([UserMessage(content=\"What is the capital of France?\", source=\"user\")])\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Azure OpenAI\n",
    "\n",
    "Install the `azure` and `openai` extensions to use the {py:class}`~autogen_ext.models.AzureOpenAIChatCompletionClient`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "shellscript"
    }
   },
   "outputs": [],
   "source": [
    "pip install 'autogen-ext[openai,azure]==0.4.0.dev4'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities.\n",
    "For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.\n",
    "\n",
    "The following code snippet shows how to use AAD authentication.\n",
    "The identity used must be assigned the [Cognitive Services OpenAI User](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user) role."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from autogen_ext.models import AzureOpenAIChatCompletionClient\n",
    "from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
    "\n",
    "# Create the token provider\n",
    "token_provider = get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\")\n",
    "\n",
    "az_model_client = AzureOpenAIChatCompletionClient(\n",
    "    model=\"{your-azure-deployment}\",\n",
    "    api_version=\"2024-06-01\",\n",
    "    azure_endpoint=\"https://{your-custom-endpoint}.openai.azure.com/\",\n",
    "    azure_ad_token_provider=token_provider,  # Optional if you choose key-based authentication.\n",
    "    # api_key=\"sk-...\", # For key-based authentication.\n",
    "    model_capabilities={\n",
    "        \"vision\": True,\n",
    "        \"function_calling\": True,\n",
    "        \"json_output\": True,\n",
    "    },\n",
    ")"
   ]
  },
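As a quick check, the Azure client can be exercised the same way the OpenAI client is tested above; a minimal sketch (not part of this commit), reusing the `create` call and `UserMessage` from the OpenAI test cell:

```python
# Sketch only, not part of this commit: test the Azure client the same way the
# OpenAI client is tested above. Top-level await works in a notebook cell;
# wrap in asyncio.run(...) when running in a script.
from autogen_core.components.models import UserMessage

result = await az_model_client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)
print(result)
```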
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions) for how to use the Azure client directly or for more info."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Local Models\n",
    "\n",
    "We are working on it. Stay tuned!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
@@ -0,0 +1,18 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Swarm"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}