AutoGen Tutorial (#1702)

* update intro

* update intro

* tutorial

* update notebook

* update notebooks

* update

* merge

* add conversation patterns

* rename; delete unused files.

* Reorganize new guides

* Improve intro, fix typos

* add what is next

* outline for code executor

* initiate chats png

* Improve language

* Improve language of human in the loop tutorial

* update

* update

* Update group chat

* code executor

* update conversation patterns

* update code executor section to use legacy code executor

* update conversation pattern

* redirect

* update figures

* update whats next

* Break down chapter 2 into two chapters

* update

* fix website build

* Minor corrections of typos and grammar.

* remove broken links, update sidebar

* code executor update

* Suggest changes to the code executor section

* update what is next

* reorder

* update getting started

* title

* update navbar

* Delete website/docs/tutorial/what-is-next.ipynb

* update conversable patterns

* Improve language

* Fix typo

* minor fixes

---------

Co-authored-by: Jack Gerrits <jack@jackgerrits.com>
Co-authored-by: gagb <gagb@users.noreply.github.com>
Co-authored-by: Joshua Kim <joshua@spectdata.com>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Eric Zhu 2024-03-09 09:45:58 -08:00 committed by GitHub
parent 08e2615322
commit 74298cda2c
22 changed files with 3822 additions and 82 deletions


@ -41,6 +41,7 @@ repos:
pyproject.toml |
website/static/img/ag.svg |
website/yarn.lock |
website/docs/tutorial/code-executors.ipynb |
notebook/.*
)$
- repo: https://github.com/nbQA-dev/nbQA

website/.gitattributes (vendored, new file)

@ -0,0 +1,11 @@
docs/Tutorial/code_executor_files/figure-markdown_strict/cell-8-output-1.png filter=lfs diff=lfs merge=lfs -text
docs/Tutorial/assets/Human-in-the-loop.png filter=lfs diff=lfs merge=lfs -text
docs/Tutorial/assets/conversable-agent.png filter=lfs diff=lfs merge=lfs -text
docs/Tutorial/.cache/41/cache.db filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/nested-chats.png filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/sequential-two-agent-chat.png filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/two-agent-chat.png filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/code-execution-in-conversation.png filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/code-executor-docker.png filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/code-executor-no-docker.png filter=lfs diff=lfs merge=lfs -text
docs/tutorial/assets/group-chat.png filter=lfs diff=lfs merge=lfs -text

website/.gitignore (vendored)

@ -11,8 +11,11 @@ package-lock.json
docs/reference
/docs/notebooks
docs/tutorial/*.mdx
docs/tutorial/**/*.png
!docs/tutorial/assets/*.png
docs/topics/llm_configuration.mdx
docs/topics/code-execution/jupyter-code-executor.mdx
docs/topics/code-execution/*.mdx
# Misc
.DS_Store


@ -1,78 +0,0 @@
# Getting Started
<!-- ### Welcome to AutoGen, a library for enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework! -->
AutoGen is a framework that enables development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
![AutoGen Overview](/img/autogen_agentchat.png)
### Main Features
- AutoGen enables building next-gen LLM applications based on [multi-agent conversations](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
- It supports [diverse conversation patterns](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns) for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
the number of agents, and agent conversation topology.
- It provides a collection of working systems with different complexities. These systems span a [wide range of applications](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen) from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
- AutoGen provides [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification). It offers utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.
AutoGen is powered by collaborative [research studies](/docs/Research) from Microsoft, Penn State University, and University of Washington.
### Quickstart
Install from pip: `pip install pyautogen`. Find more options in [Installation](/docs/installation/).
For [code execution](/docs/FAQ#code-execution), we strongly recommend installing the Python `docker` package and using Docker.
#### Multi-Agent Conversation Framework
AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),
```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
# See https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample.json
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False}) # IMPORTANT: set to True to run code in docker, recommended
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
# This initiates an automated chat between the two agents to solve the task
```
The figure below shows an example conversation flow with AutoGen.
![Agent Chat Example](/img/chat_example.png)
* [Code examples](/docs/Examples).
* [Documentation](/docs/Use-Cases/agent_chat).
#### Enhanced LLM Inference
AutoGen also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize LLM generations with your own tuning data, success metrics, and budgets.
```python
# perform tuning for openai<1
config, analysis = autogen.Completion.tune(
data=tune_data,
metric="success",
mode="max",
eval_func=eval_func,
inference_budget=0.05,
optimization_budget=3,
num_samples=-1,
)
# perform inference for a test instance
response = autogen.Completion.create(context=test_instance, **config)
```
* [Code examples](/docs/Examples).
* [Documentation](/docs/Use-Cases/enhanced_inference).
### Where to Go Next?
* Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference).
* Find [code examples](/docs/Examples).
* Read [SDK](/docs/reference/agentchat/conversable_agent/).
* Learn about [research](/docs/Research) around AutoGen.
* [Roadmap](https://github.com/orgs/microsoft/projects/989/views/3)
* Chat on [Discord](https://discord.gg/pAbnFJrkgZ).
* Follow on [Twitter](https://twitter.com/pyautogen).
If you like our project, please give it a [star](https://github.com/microsoft/autogen/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/Contribute).
<iframe src="https://ghbtns.com/github-btn.html?user=microsoft&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>


@ -0,0 +1,138 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Getting Started
AutoGen is a framework that enables development of LLM applications using
multiple agents that can converse with each other to solve tasks. AutoGen agents
are customizable, conversable, and seamlessly allow human participation. They
can operate in various modes that employ combinations of LLMs, human inputs, and
tools.
![AutoGen Overview](/img/autogen_agentchat.png)
### Main Features
- AutoGen enables building next-gen LLM applications based on [multi-agent
conversations](/docs/Use-Cases/agent_chat) with minimal effort. It simplifies
the orchestration, automation, and optimization of a complex LLM workflow. It
maximizes the performance of LLM models and overcomes their weaknesses.
- It supports [diverse conversation
patterns](/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns)
for complex workflows. With customizable and conversable agents, developers can
use AutoGen to build a wide range of conversation patterns concerning
conversation autonomy, the number of agents, and agent conversation topology.
- It provides a collection of working systems with different complexities. These
systems span a [wide range of
applications](/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen)
from various domains and complexities. This demonstrates how AutoGen can
easily support diverse conversation patterns.
AutoGen is powered by collaborative [research studies](/docs/Research) from
Microsoft, Penn State University, and University of Washington.
### Quickstart
```sh
pip install pyautogen
```
<Tabs>
<TabItem value="local" label="Local execution" default>
:::warning
When asked, be sure to check the generated code before continuing to ensure it is safe to run.
:::
```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor
import os
from pathlib import Path
llm_config = {
"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}
work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)
assistant = AssistantAgent("assistant", llm_config=llm_config)
code_executor = LocalCommandLineCodeExecutor(work_dir=work_dir)
user_proxy = UserProxyAgent(
"user_proxy", code_execution_config={"executor": code_executor}
)
# Start the chat
user_proxy.initiate_chat(
assistant,
message="Plot a chart of NVDA and TESLA stock price change YTD.",
)
```
</TabItem>
<TabItem value="docker" label="Docker execution" default>
```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor
import os
from pathlib import Path
llm_config = {
"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}
work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)
with DockerCommandLineCodeExecutor(work_dir=work_dir) as code_executor:
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
"user_proxy", code_execution_config={"executor": code_executor}
)
# Start the chat
user_proxy.initiate_chat(
assistant,
message="Plot a chart of NVDA and TESLA stock price change YTD. Save the plot to a file called plot.png",
)
```
Open `coding/plot.png` to see the generated plot.
</TabItem>
</Tabs>
:::tip
Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
:::
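If you prefer not to hard-code an API key, the configuration list can also be loaded from an environment variable or a JSON file. Here is a minimal sketch using `config_list_from_json`, mirroring the earlier version of this page (`OAI_CONFIG_LIST` refers to the sample file in the repository):

```python
from autogen import config_list_from_json

# Load LLM inference endpoints from the OAI_CONFIG_LIST environment
# variable or a file with that name; see OAI_CONFIG_LIST_sample.json
# in the repository for the expected format.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}
```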
#### Multi-Agent Conversation Framework
AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. See [twoagent.py](https://github.com/microsoft/autogen/blob/main/test/twoagent.py) for an example.
The figure below shows an example conversation flow with AutoGen.
![Agent Chat Example](/img/chat_example.png)
### Where to Go Next?
* Go through the [tutorial](/docs/tutorial/introduction) to learn more about the core concepts in AutoGen
* Read the examples and guides in the [notebooks section](/docs/notebooks)
* Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference)
* Read the [API](/docs/reference/agentchat/conversable_agent/) docs
* Learn about [research](/docs/Research) around AutoGen
* Chat on [Discord](https://discord.gg/pAbnFJrkgZ)
* Follow on [Twitter](https://twitter.com/pyautogen)
If you like our project, please give it a [star](https://github.com/microsoft/autogen/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/Contribute).
<iframe src="https://ghbtns.com/github-btn.html?user=microsoft&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f07f62e09015f13a254ebc1dbfe92c7209608ed4888ab28722b4f39dad575058
size 27871


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:41ab19925fe674924fadbcbe86fc3b4aee90273164b91ea5075b01d86793559f
size 45857


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ca20b9ff0d84e3852f0ca09147c33bd83436d0aebdcb8d92a97c877e3700296
size 41216


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88ef4b0853b175c02cdb029997a731c396eb91ed5f9e5dc671cb2403169118f9
size 30650


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43763dfc90485163e3937e6261f2851da678198940d571fc5731e9e58cf3188b
size 90369


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:abe6ced78aeaf839ef1165c35d23e8221b3febb10f9201f13d283976a3fac42a
size 38796


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:37fe62859b8b25f29920a7eef69d5fc2cc583da8c91dbf3e0394d19d52f35ef3
size 86429


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dcf72e7277451fe71121a05f754b7ae8fb82e6e0f124a6c03fbeea6d5e636856
size 59877


@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ccdc58ef6a99562603c6daddcf74b0b168710ed8322dc35be61190e16774eec
size 47757

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


@ -0,0 +1,558 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Allowing Human Feedback in Agents\n",
"\n",
"In the last two chapters we introduced the `ConversableAgent` class and showed how you can use it to create autonomous (`human_input_mode=NEVER`) agents that can accomplish tasks. We also showed how to properly terminate a conversation between agents.\n",
"\n",
"But many applications may require putting humans in-the-loop with agents. For example, to allow human feedback to steer agents in the right direction, specify goals, etc. In this chapter, we will show how AutoGen supports human intervention."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Human Input Modes\n",
"\n",
"Currently AutoGen supports three modes for human input. The mode is specified through\n",
"the `human_input_mode` argument of the `ConversableAgent`. The three modes are:\n",
"\n",
"1. `NEVER`: human input is never requested.\n",
"2. `TERMINATE` (default): human input is only requested when a termination condition is\n",
" met. Note that in this mode if the human chooses to intercept and reply, the conversation continues\n",
" and the counter used by `max_consectuive_auto_reply` is reset.\n",
"3. `ALWAYS`: human input is always requested and the human can choose to skip and trigger an auto-reply,\n",
" intercept and provide feedback, or terminate the conversation. Note that in this mode\n",
" termination based on `max_consecutive_auto_reply` is ignored.\n",
"\n",
"The previous chapters already showed many examples of the cases when `human_input_mode` is `NEVER`. \n",
"Below we show one such example again and then show the differences when this mode is set to `ALWAYS` and `NEVER` instead."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from autogen import ConversableAgent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Human Input Mode = `NEVER`\n",
"\n",
"In this mode, human input is never requested and the termination conditions\n",
"are used to terminate.\n",
"This mode is useful when you want your agents to act fully autonomously.\n",
"\n",
"Here is an example of using this mode to run a simple guess-a-number game between\n",
"two agents, with the maximum number of guesses set to 5 through\n",
"`max_consecutive_auto_reply`, and the termination message set to check for the \n",
"number that is the correct guess."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"I have a number between 1 and 100. Guess it!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"50\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"75\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"62\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"56\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"53\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"agent_with_number = ConversableAgent(\n",
" \"agent_with_number\",\n",
" system_message=\"You are playing a game of guess-my-number. You have the \"\n",
" \"number 53 in your mind, and I will try to guess it. \"\n",
" \"If I guess too high, say 'too high', if I guess too low, say 'too low'. \",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ[\"OPENAI_API_KEY\"]}]},\n",
" is_termination_msg=lambda msg: \"53\" in msg[\"content\"], # terminate if the number is guessed by the other agent\n",
" human_input_mode=\"NEVER\", # never ask for human input\n",
")\n",
"\n",
"agent_guess_number = ConversableAgent(\n",
" \"agent_guess_number\",\n",
" system_message=\"I have a number in my mind, and you will try to guess it. \"\n",
" \"If I say 'too high', you should guess a lower number. If I say 'too low', \"\n",
" \"you should guess a higher number. \",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ[\"OPENAI_API_KEY\"]}]},\n",
" human_input_mode=\"NEVER\",\n",
")\n",
"\n",
"result = agent_with_number.initiate_chat(\n",
" agent_guess_number,\n",
" message=\"I have a number between 1 and 100. Guess it!\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Yay! The game is over. The guessing agent got the number correctly in exactly\n",
"5 guesses using binary search -- very efficient!\n",
"You can see that the conversation was terminated after the 5th guess which\n",
"triggered both the counter-based and message-based termination conditions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Human Input Mode = `ALWAYS`\n",
"\n",
"In this mode, human input is always requested and the human can choose to skip,\n",
"intersecpt, or terminate the conversation.\n",
"Let us see this mode in action by playing the same game as before with the agent with the number, but this time\n",
"participating in the game as a human.\n",
"We will be the agent that is guessing the number, and play against the agent\n",
"with the number from before."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mhuman_proxy\u001b[0m (to agent_with_number):\n",
"\n",
"10\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to human_proxy):\n",
"\n",
"Too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mhuman_proxy\u001b[0m (to agent_with_number):\n",
"\n",
"70\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to human_proxy):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mhuman_proxy\u001b[0m (to agent_with_number):\n",
"\n",
"50\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to human_proxy):\n",
"\n",
"Too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mhuman_proxy\u001b[0m (to agent_with_number):\n",
"\n",
"53\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"human_proxy = ConversableAgent(\n",
" \"human_proxy\",\n",
" llm_config=False, # no LLM used for human proxy\n",
" human_input_mode=\"ALWAYS\", # always ask for human input\n",
")\n",
"\n",
"# Start a chat with the agent with number with an initial guess.\n",
"result = human_proxy.initiate_chat(\n",
" agent_with_number, # this is the same agent with the number as before\n",
" message=\"10\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you run the code above, you will be prompt to enter a response\n",
"each time it is your turn to speak. You can see the human in the conversation\n",
"was not very good at guessing the number... but hey the agent was nice enough\n",
"to give out the number in the end."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Human Input Mode = `TERMINATE`\n",
"\n",
"In this mode, human input is only requested when a termination condition is\n",
"met. **If the human choose to intercept and reply, the counter will be reset**; if \n",
"the human choose to skip, automatic reply mechanism will be used; if the human\n",
"choose to terminate, the conversation will be terminated.\n",
"\n",
"Let us see this mode in action by playing the same game again, but this time\n",
"the agent with the number will choose a new number and the game will restart."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"I have a number between 1 and 100. Guess it!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"50\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"75\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"62\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"56\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"53\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Correct! You've guessed my number.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Great! This was a fun game. Let's play again sometime.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"10\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Your guess is too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Okay, let's try 5.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Your guess is too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Okay, how about 2?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Your guess is too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Alright, then it must be 3.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Yes, correct! The number was 3. Good job guessing it!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Thank you! That was indeed a fun game. I look forward to playing again with you.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"I'm glad you enjoyed it! I'm always here for a good game. Don't hesitate to come back when you want to play again.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Surely, I will. This was fun. Thanks for playing with me. See you next time!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"You're welcome! I'm glad you had fun. Looking forward to our next game. See you next time!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Lets play again\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Sure! I'll think of a number between 1 and 100. Let's start guessing!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"I'll start with 50.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"The guess is too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Okay, let's try 75.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"The guess is too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"How about 62?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"The guess is too low.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Then, let's try 68.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"The guess is too high.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33magent_with_number\u001b[0m (to agent_guess_number):\n",
"\n",
"Okay, how about 65?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33magent_guess_number\u001b[0m (to agent_with_number):\n",
"\n",
"Yes, correct! The number was 65. Good job guessing it!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"agent_with_number = ConversableAgent(\n",
" \"agent_with_number\",\n",
" system_message=\"You are playing a game of guess-my-number. \"\n",
" \"In the first game, you have the \"\n",
" \"number 53 in your mind, and I will try to guess it. \"\n",
" \"If I guess too high, say 'too high', if I guess too low, say 'too low'. \",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ[\"OPENAI_API_KEY\"]}]},\n",
" max_consecutive_auto_reply=5,\n",
" human_input_mode=\"TERMINATE\", # ask for human input until the game is terminated\n",
")\n",
"\n",
"agent_guess_number = ConversableAgent(\n",
" \"agent_guess_number\",\n",
" system_message=\"I have a number in my mind, and you will try to guess it. \"\n",
" \"If I say 'too high', you should guess a lower number. If I say 'too low', \"\n",
" \"you should guess a higher number. \",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ[\"OPENAI_API_KEY\"]}]},\n",
" human_input_mode=\"NEVER\",\n",
")\n",
"\n",
"result = agent_with_number.initiate_chat(\n",
" agent_guess_number,\n",
" message=\"I have a number between 1 and 100. Guess it!\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the previous conversation, we were asked to provide human input first time\n",
"when the counter-based termination condition was met. We intercepted the\n",
"message and replied \"Let's play again\". The conversation continued and the\n",
"counter was reset. The game continued until the counter-based termination\n",
"condition was met again, and this time we chose to skip. The conversation\n",
"continued again but human input was requested at every turn since, and we\n",
"chose to skip each time until the game was over."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"In this chapter, we showed you how to use the human-in-the-loop component\n",
"to provide human feedback to agent and to terminate conversation.\n",
"We also showed you the different human input modes and how they affect\n",
"the behavior of the human-in-the-loop component.\n",
"\n",
"The next chapter will be all about code executor -- the most powerful\n",
"component second only to LLMs."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@ -0,0 +1,259 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to AutoGen\n",
"\n",
"Welcome! AutoGen is an open-source framework that leverages multiple _agents_ to enable complex workflows. This tutorial introduces basic concepts and building blocks of AutoGen."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Why AutoGen?\n",
"\n",
"> _The whole is greater than the sum of its parts._<br/>\n",
"> -**Aristotle**\n",
"\n",
"While there are many definitions of agents, in AutoGen, an agent is an entity that reacts to its environment. This abstraction not only allows agents to model real-world and abstract entities, such as people and algorithms, but it also simplifies implementation of complex workflows as collaboration among agents.\n",
"\n",
"Further, AutoGen is extensible and composable: you can extend a simple agent with customizable components and create workflows that can combine these agents, resulting in implementations that are modular and easy to maintain.\n",
"\n",
"Most importantly, AutoGen is developed by a vibrant community of researchers\n",
"and engineers. It incorporates the latest research in multi-agent systems\n",
"and has been used in many real-world applications, including math problem solvers,\n",
"supply chain optimization, data analysis, market research, and gaming."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"The simplest way to install AutoGen is from pip: `pip install pyautogen`. Find more options in [Installation](/docs/installation/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Agents\n",
"\n",
"In AutoGen, an agent is an entity that can send and receive messages to and from\n",
"other agents in its environment. An agent can be powered by models (such as a large language model\n",
"like GPT-4), code executors (such as an IPython kernel), human, or a combination of these\n",
"and other pluggable and customizable components.\n",
"\n",
"```{=mdx}\n",
"![ConversableAgent](./assets/conversable-agent.png)\n",
"```\n",
"\n",
"An example of such agents is the built-in `ConversableAgent` which supports the following components:\n",
"\n",
"1. A list of LLMs\n",
"2. A code executor\n",
"3. A function and tool executor\n",
"4. A component for keeping human-in-the-loop\n",
"\n",
"You can switch each component on or off and customize it to suit the need of \n",
"your application. You even can add additional components to the agent's capabilities."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"LLMs, for example, enable agents to converse in natural languages and transform between structured and unstructured text. \n",
"The following example shows a `ConversableAgent` with a GPT-4 LLM switched on and other\n",
"components switched off:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from autogen import ConversableAgent\n",
"\n",
"agent = ConversableAgent(\n",
" \"chatbot\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" code_execution_config=False, # Turn off code execution, by default it is off.\n",
" function_map=None, # No registered functions, by default it is None.\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `llm_config` argument contains a list of configurations for the LLMs.\n",
"See [LLM Configuration](/docs/topics/llm_configuration) for more details."
]
},
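{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because it is a list, `config_list` can hold more than one configuration. As a sketch (the second, `gpt-3.5-turbo` entry is only an assumption for illustration), entries are tried in order, so listing more than one model gives you a fallback if the first fails:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hypothetical multi-model configuration: entries are tried in order.\n",
"fallback_llm_config = {\n",
"    \"config_list\": [\n",
"        {\"model\": \"gpt-4\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")},\n",
"        {\"model\": \"gpt-3.5-turbo\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")},\n",
"    ]\n",
"}"
]
},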
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can ask this agent to generate a response to a question using the `generate_reply` method:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's one for you:\n",
"\n",
"Why don't scientists trust atoms? \n",
"\n",
"Because they make up everything!\n"
]
}
],
"source": [
"reply = agent.generate_reply(messages=[{\"content\": \"Tell me a joke.\", \"role\": \"user\"}])\n",
"print(reply)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Roles and Conversations\n",
"\n",
"In AutoGen, you can assign roles to agents and have them participate in conversations or chat with each other. A conversation is a sequence of messages exchanged between agents. You can then use these conversations to make progress on a task. For example, in the example below, we assign different roles to two agents by setting their\n",
"`system_message`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"cathy = ConversableAgent(\n",
" \"cathy\",\n",
" system_message=\"Your name is Cathy and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.9, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")\n",
"\n",
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have two comedian agents, we can ask them to start a comedy show.\n",
"This can be done using the `initiate_chat` method.\n",
"We set the `max_turns` to 2 to keep the conversation short."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Sure, here's a classic one for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"That's a great one, Joe! Here's my turn:\n",
"\n",
"Why don't some fish play piano?\n",
"\n",
"Because you can't tuna fish!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Haha, good one, Cathy! I have another:\n",
"\n",
"Why was the math book sad?\n",
"\n",
"Because it had too many problems!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"result = joe.initiate_chat(cathy, message=\"Tell me a joke.\", max_turns=2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The comedians are bouncing off each other!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"In this chapter, we introduced the concept of agents, roles and conversations in AutoGen.\n",
"For simplicity, we only used LLMs and created fully autonomous agents (`human_input_mode` was set to `NEVER`). \n",
"In the next chapter, \n",
"we will show how you can control when to _terminate_ a conversation between autonomous agents."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@ -0,0 +1,343 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Terminating Conversations Between Agents\n",
"\n",
"In this chapter, we will explore how to terminate a conversation between AutoGen agents.\n",
"\n",
"_But why is this important?_ Its because in any complex, autonomous workflows it's crucial to know when to stop the workflow. For example, when the task is completed, or perhaps when the process has consumed enough resources and needs to either stop or adopt different strategies, such as user intervention. So AutoGen natively supports several mechanisms to terminate conversations.\n",
"\n",
"How to Control Termination with AutoGen?\n",
"Currently there are two broad mechanism to control the termination of conversations between agents:\n",
"\n",
"1. **Specify parameters in `initiate_chat`**: When initiating a chat, you can define parameters that determine when the conversation should end.\n",
"\n",
"2. **Configure an agent to trigger termination**: When defining individual agents, you can specify parameters that allow agents to terminate of a conversation based on particular (configurable) conditions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Parameters in `initiate_chat`\n",
"In the previous chapter we actually demonstrated this when we used the `max_turns` parameter to limit the number of turns. If we increase `max_turns` to say `3` notice the conversation takes more rounds to terminate:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from autogen import ConversableAgent"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"cathy = ConversableAgent(\n",
" \"cathy\",\n",
" system_message=\"Your name is Cathy and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.9, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")\n",
"\n",
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Great one, Joe! Here's one to add to our routine. Why don't we ever tell secrets on a farm?\n",
"\n",
"Because the potatoes have eyes, the corn has ears, and the beans stalk!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Oh, Cathy, that's a good one! I love it. Let's keep the comedy rolling. How about this one? \n",
"\n",
"Why don't some fish play piano?\n",
"\n",
"Because you can't tuna fish!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"result = joe.initiate_chat(cathy, message=\"Tell me a joke.\", max_turns=2)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Great one, Joe! Here's one to add to our routine. Why don't we ever tell secrets on a farm?\n",
"\n",
"Because the potatoes have eyes, the corn has ears, and the beans stalk!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Oh, Cathy, that's a good one! I love it. Let's keep the comedy rolling. How about this one? \n",
"\n",
"Why don't some fish play piano?\n",
"\n",
"Because you can't tuna fish!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Hah, classic! Alright, here's my comeback. \n",
"\n",
"Why don't skeletons fight each other?\n",
"\n",
"Because they don't have the guts!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Oh Cathy, you really have a funny bone! Well here's mine:\n",
"\n",
"What do you call a snowman with a six-pack?\n",
"\n",
"An abdominal snowman!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"result = joe.initiate_chat(\n",
" cathy, message=\"Tell me a joke.\", max_turns=3\n",
") # increase the number of max turns before termination"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Agent-triggered termination\n",
"You can also terminate a conversation by configuring parameters of an agent.\n",
"Currently, there are two parameters you can configure:\n",
"\n",
"1. `max_consecutive_auto_reply`: This condition trigger termination if the number of automatic responses to the same sender exceeds a threshold. You can customize this using the `max_consecutive_auto_reply` argument of the `ConversableAgent` class. To accomplish this the agent maintains a counter of the number of consecutive automatic responses to the same sender. Note that this counter can be reset because of human intervention. We will describe this in more detail in the next chapter.\n",
"2. `is_termination_msg`: This condition can trigger termination if the _received_ message satisfies a particular condition, e.g., it contains the word \"TERMINATE\". You can customize this condition using the `is_terminate_msg` argument in the constructor of the `ConversableAgent` class."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using `max_consecutive_auto_reply`\n",
"\n",
"In the example below lets set `max_consecutive_auto_reply` to `1` and notice how this ensures that Joe only replies once."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Great one, Joe! Here's one to add to our routine. Why don't we ever tell secrets on a farm?\n",
"\n",
"Because the potatoes have eyes, the corn has ears, and the beans stalk!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Oh, Cathy, that's a good one! I love it. Let's keep the comedy rolling. How about this one? \n",
"\n",
"Why don't some fish play piano?\n",
"\n",
"Because you can't tuna fish!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
" max_consecutive_auto_reply=1, # Limit the number of consecutive auto-replies.\n",
")\n",
"\n",
"result = joe.initiate_chat(cathy, message=\"Tell me a joke.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using `is_termination_msg`\n",
"\n",
"Let's set the termination message to \"GOOD BYE\" and see how the conversation terminates."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Tell me a joke and then say the words GOOD BYE.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"GOOD BYE!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
" is_termination_msg=lambda msg: \"good bye\" in msg[\"content\"].lower(),\n",
")\n",
"\n",
"result = joe.initiate_chat(cathy, message=\"Tell me a joke and then say the words GOOD BYE.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Notice how the conversation ended based on contents of cathy's message!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"In this chapter we introduced mechanisms to terminate a conversation between agents.\n",
"You can configure both parameters in `initiate_chat` and also configuration of agents.\n",
"\n",
"That said, it is important to note that when a termination condition is triggered,\n",
"the conversation may not always terminated immediately. The actual termination\n",
"depends on the `human_input_mode` argument of the `ConversableAgent` class.\n",
"For example, when mode is `NEVER` the termination conditions above will end the conversations.\n",
"But when mode is `ALWAYS` or `TERMINATE`, it will not terminate immediately\n",
"we describe this behavior and why its important in the next chapter.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@ -0,0 +1,40 @@
# What is Next?
Now that you have learned the basics of AutoGen, you can start to build your own
agents. Here are some ideas to get you started without going into the advanced
topics:
1. **Chat with LLMs**: In [Human in the Loop](./human-in-the-loop) we covered
the basic human-in-the-loop usage. You can try to hook up different LLMs
using proxy servers like [Ollama](https://github.com/ollama/ollama), and
chat with them using the human-in-the-loop component of your human proxy
agent (see the sketch after this list).
2. **Prompt Engineering**: In [Code Executors](./code-executors) we
covered the simple two-agent scenario using GPT-4 and a Python code executor.
To make this scenario work for different LLMs and programming languages, you
probably need to tune the system message of the code writer agent. As with
the other scenarios covered in this tutorial, you can also try to
tune system messages for different LLMs.
3. **Complex Tasks**: In [Conversation Patterns](./conversation-patterns)
we covered the basic conversation patterns. You can try to find other tasks
that can be decomposed into these patterns, and leverage the code executors
to make the agents more powerful.
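As a sketch for the first idea (the model name and endpoint URL below are
assumptions based on Ollama's OpenAI-compatible API on its default port;
adjust them to your setup), you can point an agent's `llm_config` at a local
server and chat with it through a human proxy agent:

```python
from autogen import ConversableAgent

# Assumed local endpoint: Ollama's OpenAI-compatible API on its default port.
local_llm_config = {
    "config_list": [
        {
            "model": "llama2",  # hypothetical local model name
            "base_url": "http://localhost:11434/v1",
            "api_key": "ollama",  # placeholder; local servers typically ignore it
        }
    ],
}

assistant = ConversableAgent("assistant", llm_config=local_llm_config)
human_proxy = ConversableAgent(
    "human_proxy",
    llm_config=False,  # no LLM for the human proxy
    human_input_mode="ALWAYS",  # you provide every reply
)

human_proxy.initiate_chat(assistant, message="Tell me a fun fact.")
```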
## Dig Deeper
- Read the [topic docs](/docs/topics) to learn more
- Read the examples and guides in the [notebooks section](/docs/notebooks)
## Get Help
If you have any questions, you can ask in our [GitHub
Discussions](https://github.com/microsoft/autogen/discussions), or join
our [Discord Server](https://discord.gg/pAbnFJrkgZ).
[![](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat.png)](https://discord.gg/pAbnFJrkgZ)
## Get Involved
- Contribute your work to our [gallery](../Gallery)
- Follow our [contribution guide](../Contribute) to make a pull request to AutoGen
- You can also share your work with the community on the Discord server.


@ -73,7 +73,7 @@ def notebooks_target_dir(website_directory: Path) -> Path:
def load_metadata(notebook: Path) -> typing.Dict:
content = json.load(notebook.open())
content = json.load(notebook.open(encoding="utf-8"))
return content["metadata"]


@ -11,7 +11,43 @@
module.exports = {
docsSidebar: [
'Getting-Started',
'Getting-Started',
{
type: 'category',
label: 'Tutorial',
items: [
{
type: 'doc',
id: 'tutorial/introduction',
label: 'Introduction',
},
{
type: 'doc',
id: 'tutorial/termination',
label: 'Termination',
},
{
type: 'doc',
id: 'tutorial/human-in-the-loop',
label: 'Human in the Loop',
},
{
type: 'doc',
id: 'tutorial/code-executors',
label: 'Code Executors',
},
{
type: 'doc',
id: 'tutorial/conversation-patterns',
label: 'Conversation Patterns',
},
{
type: 'doc',
id: 'tutorial/what-is-next',
label: 'What is Next?',
}
],
},
{
type: "category",
label: "Installation",
@ -22,7 +58,16 @@
id: "installation/Installation"
},
},
{'Topics': [{type: 'autogenerated', dirName: 'topics'}]},
{
type: 'category',
label: 'Topics',
link: {
type: 'generated-index',
title: 'Topics',
description: 'Learn about various topics in AutoGen',
slug: 'topics',
},
items: [{type: 'autogenerated', dirName: 'topics'}]},
{'Use Cases': [{type: 'autogenerated', dirName: 'Use-Cases'}]},
'Contribute',
'Research',