autogen/website/docs/tutorial/chat-termination.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Terminating Conversations Between Agents\n",
"\n",
"In this chapter, we will explore how to terminate a conversation between AutoGen agents.\n",
"\n",
"_But why is this important?_ Its because in any complex, autonomous workflows it's crucial to know when to stop the workflow. For example, when the task is completed, or perhaps when the process has consumed enough resources and needs to either stop or adopt different strategies, such as user intervention. So AutoGen natively supports several mechanisms to terminate conversations.\n",
"\n",
"How to Control Termination with AutoGen?\n",
"Currently there are two broad mechanism to control the termination of conversations between agents:\n",
"\n",
"1. **Specify parameters in `initiate_chat`**: When initiating a chat, you can define parameters that determine when the conversation should end.\n",
"\n",
"2. **Configure an agent to trigger termination**: When defining individual agents, you can specify parameters that allow agents to terminate of a conversation based on particular (configurable) conditions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Parameters in `initiate_chat`\n",
"In the previous chapter we actually demonstrated this when we used the `max_turns` parameter to limit the number of turns. If we increase `max_turns` to say `3` notice the conversation takes more rounds to terminate:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from autogen import ConversableAgent"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"cathy = ConversableAgent(\n",
" \"cathy\",\n",
" system_message=\"Your name is Cathy and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.9, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")\n",
"\n",
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Cathy, tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Sure, here's one for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Haha, that's a good one, Cathy! Okay, my turn. \n",
"\n",
"Why don't we ever tell secrets on a farm?\n",
"\n",
"Because the potatoes have eyes, the corn has ears, and the beans stalk.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Haha, that's a great one! A farm is definitely not the place for secrets. Okay, my turn again. \n",
"\n",
"Why couldn't the bicycle stand up by itself?\n",
"\n",
"Because it was two-tired!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"result = joe.initiate_chat(cathy, message=\"Cathy, tell me a joke.\", max_turns=2)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Cathy, tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Sure, here's one for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Haha, that's a good one, Cathy! Okay, my turn. \n",
"\n",
"Why don't we ever tell secrets on a farm?\n",
"\n",
"Because the potatoes have eyes, the corn has ears, and the beans stalk.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Haha, that's a great one! A farm is definitely not the place for secrets. Okay, my turn again. \n",
"\n",
"Why couldn't the bicycle stand up by itself?\n",
"\n",
"Because it was two-tired!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Haha, that's a wheely good one, Cathy!\n",
"\n",
"Why did the golfer bring two pairs of pants?\n",
"\n",
"In case he got a hole in one!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Haha, that's a perfect swing of a joke!\n",
"\n",
"Why did the scarecrow win an award?\n",
"\n",
"Because he was outstanding in his field!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"result = joe.initiate_chat(\n",
" cathy, message=\"Cathy, tell me a joke.\", max_turns=3\n",
") # increase the number of max turns before termination"
]
},
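{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result returned by `initiate_chat` records the exchanged messages, so we can confirm the effect of `max_turns` by counting them. The snippet below is a small sketch that assumes the returned `ChatResult` exposes a `chat_history` list of message dicts, as in recent AutoGen versions; adjust if your version differs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Each turn produces one message from each agent, so max_turns=3 should\n",
"# yield about 2 * 3 = 6 recorded messages.\n",
"# `chat_history` is assumed to be a list of message dicts on the result.\n",
"print(len(result.chat_history))"
]
},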
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Agent-triggered termination\n",
"You can also terminate a conversation by configuring parameters of an agent.\n",
"Currently, there are two parameters you can configure:\n",
"\n",
"1. `max_consecutive_auto_reply`: This condition triggers termination if the number of automatic responses to the same sender exceeds a threshold. You can customize this using the `max_consecutive_auto_reply` argument of the `ConversableAgent` class. To accomplish this the agent maintains a counter of the number of consecutive automatic responses to the same sender. Note that this counter can be reset because of human intervention. We will describe this in more detail in the next chapter.\n",
"2. `is_termination_msg`: This condition can trigger termination if the _received_ message satisfies a particular condition, e.g., it contains the word \"TERMINATE\". You can customize this condition using the `is_terminate_msg` argument in the constructor of the `ConversableAgent` class."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using `max_consecutive_auto_reply`\n",
"\n",
"In the example below lets set `max_consecutive_auto_reply` to `1` and notice how this ensures that Joe only replies once."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Cathy, tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Sure, here's one for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Haha, that's a good one, Cathy! Okay, my turn. \n",
"\n",
"Why don't we ever tell secrets on a farm?\n",
"\n",
"Because the potatoes have eyes, the corn has ears, and the beans stalk.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Haha, that's a great one! A farm is definitely not the place for secrets. Okay, my turn again. \n",
"\n",
"Why couldn't the bicycle stand up by itself?\n",
"\n",
"Because it was two-tired!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
" max_consecutive_auto_reply=1, # Limit the number of consecutive auto-replies.\n",
")\n",
"\n",
"result = joe.initiate_chat(cathy, message=\"Cathy, tell me a joke.\")"
]
},
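{
"cell_type": "markdown",
"metadata": {},
"source": [
"The agent keeps this auto-reply counter internally, and `ConversableAgent` also exposes helper methods to read and update the limit at runtime. The sketch below assumes the `max_consecutive_auto_reply()` and `update_max_consecutive_auto_reply()` methods available in recent AutoGen versions; treat the exact names as an assumption if your version differs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Read the current limit on consecutive auto-replies (assumed accessor).\n",
"print(joe.max_consecutive_auto_reply())\n",
"\n",
"# Raise the limit for future chats; with 2, Joe would reply twice before stopping.\n",
"joe.update_max_consecutive_auto_reply(2)"
]
},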
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using `is_termination_msg`\n",
"\n",
"Let's set the termination message to \"GOOD BYE\" and see how the conversation terminates."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mjoe\u001b[0m (to cathy):\n",
"\n",
"Cathy, tell me a joke and then say the words GOOD BYE.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mcathy\u001b[0m (to joe):\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up everything!\n",
"\n",
"GOOD BYE!\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"joe = ConversableAgent(\n",
" \"joe\",\n",
" system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
" llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
" is_termination_msg=lambda msg: \"good bye\" in msg[\"content\"].lower(),\n",
")\n",
"\n",
"result = joe.initiate_chat(cathy, message=\"Cathy, tell me a joke and then say the words GOOD BYE.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Notice how the conversation ended based on contents of cathy's message!"
]
},
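{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common variant of this pattern is to instruct an agent (via its system message) to reply with a keyword such as \"TERMINATE\" when it considers the task done, and to check for that keyword in `is_termination_msg`. The function below is a minimal sketch of such a check; the guard for a missing `content` field is a defensive assumption, not something required by the example above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def contains_terminate_keyword(msg: dict) -> bool:\n",
"    # Treat any message containing the keyword \"TERMINATE\" as a stop signal.\n",
"    # The `or \"\"` guard handles messages without a text content field\n",
"    # (e.g., tool-call messages) -- a defensive assumption, not a requirement.\n",
"    return \"TERMINATE\" in (msg.get(\"content\") or \"\")\n",
"\n",
"\n",
"# It can be passed to an agent just like the lambda above, e.g.:\n",
"# ConversableAgent(..., is_termination_msg=contains_terminate_keyword)"
]
},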
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"In this chapter we introduced mechanisms to terminate a conversation between agents.\n",
"You can configure both parameters in `initiate_chat` and also configuration of agents.\n",
"\n",
"That said, it is important to note that when a termination condition is triggered,\n",
"the conversation may not always terminate immediately. The actual termination\n",
"depends on the `human_input_mode` argument of the `ConversableAgent` class.\n",
"For example, when mode is `NEVER` the termination conditions above will end the conversations.\n",
"But when mode is `ALWAYS` or `TERMINATE`, it will not terminate immediately.\n",
"We will describe this behavior and explain why it is important in the next chapter.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}