mirror of https://github.com/microsoft/autogen.git
Update more notebooks to be available on the website (#1890)
* Update more notebooks to be available on the website
* fix notebook
* update link
This commit is contained in:
parent
c75655a340
commit
00e097d4a5
|
@ -4,19 +4,20 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Handling A Long Context via `TransformChatHistory`\n",
|
"# Handling A Long Context via `TransformChatHistory`\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This notebook illustrates how you can use the `TransformChatHistory` capability to give any `Conversable` agent an ability to handle a long context. "
|
"This notebook illustrates how you can use the `TransformChatHistory` capability to give any `ConversableAgent` the ability to handle a long context.\n",
|
||||||
]
|
"\n",
|
||||||
},
|
"````{=mdx}\n",
|
||||||
{
|
":::info Requirements\n",
|
||||||
"cell_type": "code",
|
"Install `pyautogen`:\n",
|
||||||
"execution_count": 1,
|
"```bash\n",
|
||||||
"metadata": {},
|
"pip install pyautogen\n",
|
||||||
"outputs": [],
|
"```\n",
|
||||||
"source": [
|
"\n",
|
||||||
"## Uncomment to install pyautogen if you don't have it already\n",
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
"#! pip install pyautogen"
|
":::\n",
|
||||||
|
"````"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
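The capability introduced above works by truncating older chat history. As a purely illustrative sketch of that idea (this is not the actual `TransformChatHistory` API, whose parameter names and token counting differ; here tokens are approximated by whitespace-separated words):

```python
# Toy sketch of long-context handling: keep only the most recent messages
# and clip each one's content. Assumption: the real capability uses model
# token counts, not word counts.

def transform_chat_history(messages, max_messages=10, max_words_per_message=50):
    """Keep the last `max_messages` messages, clipping each to a word budget."""
    recent = messages[-max_messages:]
    clipped = []
    for msg in recent:
        words = msg["content"].split()
        clipped.append({"role": msg["role"],
                        "content": " ".join(words[:max_words_per_message])})
    return clipped

history = [{"role": "user", "content": f"message {i}"} for i in range(100)]
short = transform_chat_history(history, max_messages=5)
print(len(short))  # 5
```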
|
@ -45,6 +46,12 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
"````{=mdx}\n",
|
||||||
|
":::tip\n",
|
||||||
|
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||||
|
":::\n",
|
||||||
|
"````\n",
|
||||||
|
"\n",
|
||||||
"To add this ability to any agent, define the capability and then use `add_to_agent`."
|
"To add this ability to any agent, define the capability and then use `add_to_agent`."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
@ -652,6 +659,13 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"description": "Use the TransformChatHistory capability to handle long contexts",
|
||||||
|
"tags": [
|
||||||
|
"long context handling",
|
||||||
|
"capability"
|
||||||
|
]
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
@ -667,7 +681,7 @@
|
||||||
"name": "python",
|
"name": "python",
|
||||||
"nbconvert_exporter": "python",
|
"nbconvert_exporter": "python",
|
||||||
"pygments_lexer": "ipython3",
|
"pygments_lexer": "ipython3",
|
||||||
"version": "3.10.13"
|
"version": "3.11.7"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"nbformat": 4,
|
"nbformat": 4,
|
||||||
|
|
|
@ -5,15 +5,7 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_chess.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
"# Chess Game Playing While Chitchatting by GPT-4 Agents\n",
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"# Auto Generated Agent Chat: Chess Game Playing While Chitchatting by GPT-4 Agents\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
|
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
|
||||||
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
|
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
|
||||||
|
@ -22,10 +14,17 @@
|
||||||
"\n",
|
"\n",
|
||||||
"## Requirements\n",
|
"## Requirements\n",
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
|
"````{=mdx}\n",
|
||||||
|
":::info Requirements\n",
|
||||||
|
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
|
||||||
|
"\n",
|
||||||
"```bash\n",
|
"```bash\n",
|
||||||
"pip install pyautogen\n",
|
"pip install pyautogen chess\n",
|
||||||
"```"
|
"```\n",
|
||||||
|
"\n",
|
||||||
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
|
":::\n",
|
||||||
|
"````"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -35,16 +34,13 @@
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"%%capture --no-stderr\n",
|
"%%capture --no-stderr\n",
|
||||||
"# %pip install \"pyautogen>=0.2.3\"\n",
|
|
||||||
"from collections import defaultdict\n",
|
"from collections import defaultdict\n",
|
||||||
"from typing import Any, Dict, List, Optional, Union\n",
|
"from typing import Any, Dict, List, Optional, Union\n",
|
||||||
"\n",
|
"\n",
|
||||||
"import chess\n",
|
"import chess\n",
|
||||||
"import chess.svg\n",
|
"import chess.svg\n",
|
||||||
"\n",
|
"\n",
|
||||||
"import autogen\n",
|
"import autogen"
|
||||||
"\n",
|
|
||||||
"%pip install chess -U"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -68,20 +64,7 @@
|
||||||
" filter_dict={\n",
|
" filter_dict={\n",
|
||||||
" \"model\": [\"gpt-4\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
|
" \"model\": [\"gpt-4\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
|
||||||
" },\n",
|
" },\n",
|
||||||
")\n",
|
")"
|
||||||
"# config_list_gpt35 = autogen.config_list_from_json(\n",
|
|
||||||
"# \"OAI_CONFIG_LIST\",\n",
|
|
||||||
"# filter_dict={\n",
|
|
||||||
"# \"model\": {\n",
|
|
||||||
"# \"gpt-3.5-turbo\",\n",
|
|
||||||
"# \"gpt-3.5-turbo-16k\",\n",
|
|
||||||
"# \"gpt-3.5-turbo-16k-0613\",\n",
|
|
||||||
"# \"gpt-3.5-turbo-0301\",\n",
|
|
||||||
"# \"chatgpt-35-turbo-0301\",\n",
|
|
||||||
"# \"gpt-35-turbo-v0301\",\n",
|
|
||||||
"# },\n",
|
|
||||||
"# },\n",
|
|
||||||
"# )"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
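The `filter_dict` argument above keeps only configs whose fields match the allowed values. A rough pure-Python sketch of that filtering semantics (the real `config_list_from_json` implementation differs in detail, and the `api_key` values here are placeholders):

```python
def filter_config_list(config_list, filter_dict):
    """Keep configs whose value for every filter key is in the allowed set."""
    return [
        cfg for cfg in config_list
        if all(cfg.get(key) in allowed for key, allowed in filter_dict.items())
    ]

configs = [
    {"model": "gpt-4", "api_key": "placeholder-key-1"},
    {"model": "gpt-3.5-turbo", "api_key": "placeholder-key-2"},
    {"model": "gpt-4-32k", "api_key": "placeholder-key-3"},
]
gpt4_only = filter_config_list(configs, {"model": ["gpt-4", "gpt-4-32k"]})
print([c["model"] for c in gpt4_only])  # ['gpt-4', 'gpt-4-32k']
```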
{
|
{
|
||||||
|
@ -89,33 +72,11 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 models are kept in the list based on the filter condition.\n",
|
"````{=mdx}\n",
|
||||||
"\n",
|
":::tip\n",
|
||||||
"The config list looks like the following:\n",
|
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||||
"```python\n",
|
":::\n",
|
||||||
"config_list = [\n",
|
"````"
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4',\n",
|
|
||||||
" 'api_key': '<your OpenAI API key here>',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-32k',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
"]\n",
|
|
||||||
"```\n",
|
|
||||||
"\n",
|
|
||||||
"You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -996,6 +957,10 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"tags": ["chess"],
|
||||||
|
"description": "Use AutoGen to create two agents that are able to play chess"
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "flaml",
|
"display_name": "flaml",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
|
|
@ -4,14 +4,7 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_compression.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
"# Conversations with Chat History Compression Enabled\n",
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"# Auto Generated Agent Chat: Conversations with Chat History Compression Enabled\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"**CompressibleAgent will be deprecated.** \n",
|
"**CompressibleAgent will be deprecated.** \n",
|
||||||
"\n",
|
"\n",
|
||||||
|
@ -22,9 +15,11 @@
|
||||||
"In this notebook, we demonstrate how to enable compression of history messages using the `CompressibleAgent`. While this agent retains all the default functionalities of the `AssistantAgent`, it also provides the added feature of compression when activated through the `compress_config` setting.\n",
|
"In this notebook, we demonstrate how to enable compression of history messages using the `CompressibleAgent`. While this agent retains all the default functionalities of the `AssistantAgent`, it also provides the added feature of compression when activated through the `compress_config` setting.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Different compression modes are supported:\n",
|
"Different compression modes are supported:\n",
|
||||||
|
"\n",
|
||||||
"1. `compress_config=False` (Default): `CompressibleAgent` is equivalent to `AssistantAgent`.\n",
|
"1. `compress_config=False` (Default): `CompressibleAgent` is equivalent to `AssistantAgent`.\n",
|
||||||
"2. `compress_config=True` or `compress_config={\"mode\": \"TERMINATE\"}`: no compression will be performed. However, we will count token usage before sending requests to the OpenAI model. The conversation will be terminated directly if the total token usage exceeds the maximum token usage allowed by the model (to avoid the token limit error from OpenAI API).\n",
|
"2. `compress_config=True` or `compress_config={\"mode\": \"TERMINATE\"}`: no compression will be performed. However, we will count token usage before sending requests to the OpenAI model. The conversation will be terminated directly if the total token usage exceeds the maximum token usage allowed by the model (to avoid the token limit error from OpenAI API).\n",
|
||||||
"3. `compress_config={\"mode\": \"COMPRESS\", \"trigger_count\": <your pre-set number>}, \"leave_last_n\": <your pre-set number>`: compression is enabled.\n",
|
"3. `compress_config={\"mode\": \"COMPRESS\", \"trigger_count\": <your pre-set number>, \"leave_last_n\": <your pre-set number>}`: compression is enabled.\n",
|
||||||
|
"\n",
|
||||||
" ```python\n",
|
" ```python\n",
|
||||||
" # default compress_config\n",
|
" # default compress_config\n",
|
||||||
" compress_config = {\n",
|
" compress_config = {\n",
|
||||||
|
@ -38,12 +33,13 @@
|
||||||
" \"verbose\": False, # if True, print out the content to be compressed and the compressed content\n",
|
" \"verbose\": False, # if True, print out the content to be compressed and the compressed content\n",
|
||||||
" }\n",
|
" }\n",
|
||||||
" ```\n",
|
" ```\n",
|
||||||
|
"\n",
|
||||||
" Currently, our compression logic is as follows:\n",
|
" Currently, our compression logic is as follows:\n",
|
||||||
" 1. We will always leave the first user message (as well as system prompts) and compress the rest of the history messages.\n",
|
" 1. We will always leave the first user message (as well as system prompts) and compress the rest of the history messages.\n",
|
||||||
" 2. You can choose to not compress the last n messages in the history with \"leave_last_n\".\n",
|
" 2. You can choose to not compress the last n messages in the history with \"leave_last_n\".\n",
|
||||||
" 2. The summary is performed on a per-message basis, with the role of the messages (See compressed content in the example below).\n",
|
"   3. The summary is performed on a per-message basis, with the role of the messages (see compressed content in the example below).\n",
|
||||||
"\n",
|
"\n",
|
||||||
"4. `compress_config={\"mode\": \"CUSTOMIZED\", \"compress_function\": <A customized function for compression>}`: the `compress_function` function will be called on trigger count. The function should accept a list of messages as input and return a tuple of (is_success: bool, compressed_messages: List[Dict]). The whole message history (except system prompt) will be passed.\n",
|
"4. `compress_config={\"mode\": \"CUSTOMIZED\", \"compress_function\": <A customized function for compression>}`: the `compress_function` function will be called on trigger count. The function should accept a list of messages as input and return a tuple of (is_success: bool, compressed_messages: List[Dict]). The whole message history (except system prompt) will be passed.\n",
|
||||||
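The compression logic described above (keep the first user message and the last n messages, summarize the middle) can be pictured with a toy sketch. This is not the `CompressibleAgent` implementation, which uses an LLM to produce the summaries; here the "summary" is just a prefix:

```python
def compress_history(messages, leave_last_n=2,
                     summarize=lambda m: m["content"][:20] + "..."):
    """Sketch of per-message compression: the first message and the last
    `leave_last_n` messages are kept verbatim; the middle is summarized."""
    if len(messages) <= 1 + leave_last_n:
        return messages
    head = messages[0]
    middle = messages[1:len(messages) - leave_last_n]
    tail = messages[-leave_last_n:]
    compressed = [{"role": m["role"], "content": summarize(m)} for m in middle]
    return [head] + compressed + tail

msgs = [{"role": "user", "content": "Original task description"}] + [
    {"role": "assistant", "content": f"assistant reply number {i}, with lots of detail"}
    for i in range(5)
]
out = compress_history(msgs, leave_last_n=2)
print(len(out))  # 6
```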
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"By adjusting `trigger_count`, you can decide when to compress the history messages based on existing tokens. If this is a float number between 0 and 1, it is interpreted as a ratio of max tokens allowed by the model. For example, the AssistantAgent uses gpt-4 with max tokens 8192, the trigger_count = 0.7 * 8192 = 5734.4 -> 5734. Do not set `trigger_count` to the max tokens allowed by the model, since the same LLM is employed for compression and it needs tokens to generate the compressed content. \n",
|
"By adjusting `trigger_count`, you can decide when to compress the history messages based on existing tokens. If this is a float number between 0 and 1, it is interpreted as a ratio of max tokens allowed by the model. For example, the AssistantAgent uses gpt-4 with max tokens 8192, the trigger_count = 0.7 * 8192 = 5734.4 -> 5734. Do not set `trigger_count` to the max tokens allowed by the model, since the same LLM is employed for compression and it needs tokens to generate the compressed content. \n",
|
||||||
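The ratio-to-count arithmetic described above can be checked directly:

```python
# A float trigger_count in (0, 1) is interpreted as a ratio of the model's
# max tokens. Example from the text: gpt-4 with 8192 max tokens, ratio 0.7.
max_tokens = 8192
ratio = 0.7
trigger_count = int(ratio * max_tokens)  # 5734.4 truncates to 5734
print(trigger_count)  # 5734
```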
|
@ -56,19 +52,16 @@
|
||||||
"\n",
|
"\n",
|
||||||
"## Requirements\n",
|
"## Requirements\n",
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
|
"````{=mdx}\n",
|
||||||
|
":::info Requirements\n",
|
||||||
|
"Install `pyautogen`:\n",
|
||||||
"```bash\n",
|
"```bash\n",
|
||||||
"pip install pyautogen\n",
|
"pip install pyautogen\n",
|
||||||
"```"
|
"```\n",
|
||||||
]
|
"\n",
|
||||||
},
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
{
|
":::\n",
|
||||||
"cell_type": "code",
|
"````"
|
||||||
"execution_count": 1,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# %pip install pyautogen~=0.1.0"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -105,35 +98,11 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
|
"````{=mdx}\n",
|
||||||
"\n",
|
":::tip\n",
|
||||||
"The config list looks like the following:\n",
|
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||||
"```python\n",
|
":::\n",
|
||||||
"config_list = [\n",
|
"````"
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4',\n",
|
|
||||||
" 'api_key': '<your OpenAI API key here>',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-32k',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
"]\n",
|
|
||||||
"```\n",
|
|
||||||
"\n",
|
|
||||||
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
|
|
||||||
"\n",
|
|
||||||
"You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -884,6 +853,10 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"description": "Learn about the CompressibleAgent",
|
||||||
|
"tags": []
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "msft",
|
"display_name": "msft",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
|
|
@ -1,13 +1,5 @@
|
||||||
{
|
{
|
||||||
"cells": [
|
"cells": [
|
||||||
{
|
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_custom_model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"attachments": {},
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
|
@ -25,26 +17,17 @@
|
||||||
"\n",
|
"\n",
|
||||||
"## Requirements\n",
|
"## Requirements\n",
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
|
"````{=mdx}\n",
|
||||||
|
":::info Requirements\n",
|
||||||
|
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
|
||||||
|
"\n",
|
||||||
"```bash\n",
|
"```bash\n",
|
||||||
"pip install pyautogen torch transformers sentencepiece\n",
|
"pip install pyautogen torch transformers sentencepiece\n",
|
||||||
"```"
|
"```\n",
|
||||||
]
|
"\n",
|
||||||
},
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
{
|
":::\n",
|
||||||
"cell_type": "code",
|
"````"
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {
|
|
||||||
"execution": {
|
|
||||||
"iopub.execute_input": "2023-02-13T23:40:52.317406Z",
|
|
||||||
"iopub.status.busy": "2023-02-13T23:40:52.316561Z",
|
|
||||||
"iopub.status.idle": "2023-02-13T23:40:52.321193Z",
|
|
||||||
"shell.execute_reply": "2023-02-13T23:40:52.320628Z"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# %pip install pyautogen~=0.2.0b4 torch git+https://github.com/huggingface/transformers sentencepiece"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -455,6 +438,12 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"description": "Define and load a custom model",
|
||||||
|
"tags": [
|
||||||
|
"custom model"
|
||||||
|
]
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
|
|
@ -5,35 +5,23 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_groupchat_research.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
"# Perform Research with Multi-Agent Group Chat\n",
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"# Auto Generated Agent Chat: Performs Research with Multi-Agent Group Chat\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
|
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
|
||||||
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
|
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
|
||||||
"\n",
|
"\n",
|
||||||
"## Requirements\n",
|
"## Requirements\n",
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
|
"````{=mdx}\n",
|
||||||
|
":::info Requirements\n",
|
||||||
|
"Install `pyautogen`:\n",
|
||||||
"```bash\n",
|
"```bash\n",
|
||||||
"pip install pyautogen\n",
|
"pip install pyautogen\n",
|
||||||
"```"
|
"```\n",
|
||||||
]
|
"\n",
|
||||||
},
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
{
|
":::\n",
|
||||||
"cell_type": "code",
|
"````"
|
||||||
"execution_count": 1,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"%%capture --no-stderr\n",
|
|
||||||
"# %pip install \"pyautogen>=0.2.3\""
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -67,33 +55,11 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
|
"````{=mdx}\n",
|
||||||
"\n",
|
":::tip\n",
|
||||||
"The config list looks like the following:\n",
|
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||||
"```python\n",
|
":::\n",
|
||||||
"config_list = [\n",
|
"````"
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-32k',\n",
|
|
||||||
" 'api_key': '<your OpenAI API key here>',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-32k',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-32k-0314',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
"]\n",
|
|
||||||
"```\n",
|
|
||||||
"\n",
|
|
||||||
"You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -548,6 +514,10 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"tags": ["group chat"],
|
||||||
|
"description": "Perform research using a group chat with a number of specialized agents"
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "flaml",
|
"display_name": "flaml",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
|
|
@ -1,13 +1,5 @@
|
||||||
{
|
{
|
||||||
"cells": [
|
"cells": [
|
||||||
{
|
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"attachments": {},
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
|
@ -21,24 +13,21 @@
|
||||||
"\n",
|
"\n",
|
||||||
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
|
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](../test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
|
"This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
|
||||||
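The memo storage and retrieval that `Teachability` performs can be pictured with a toy keyword store. This is purely illustrative: the real capability uses LLM-based text analysis and a vector database for similarity search, not exact keyword matching:

```python
class ToyMemoStore:
    """Toy stand-in for Teachability's memo store: exact-keyword retrieval."""

    def __init__(self):
        self._memos = []  # list of (keyword set, memo text)

    def store(self, keywords, text):
        self._memos.append((set(keywords), text))

    def retrieve(self, query_words):
        """Return memos sharing at least one keyword with the query."""
        q = set(query_words)
        return [text for kws, text in self._memos if kws & q]

store = ToyMemoStore()
store.store(["birthday", "alice"], "Alice's birthday is March 3.")
store.store(["color", "bob"], "Bob's favorite color is green.")
print(store.retrieve(["alice", "schedule"]))  # ["Alice's birthday is March 3."]
```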
"\n",
|
"\n",
|
||||||
"## Requirements\n",
|
"## Requirements\n",
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install the [teachable] option.\n",
|
"````{=mdx}\n",
|
||||||
|
":::info Requirements\n",
|
||||||
|
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
|
||||||
|
"\n",
|
||||||
"```bash\n",
|
"```bash\n",
|
||||||
"pip install \"pyautogen[teachable]\"\n",
|
"pip install \"pyautogen[teachable]\"\n",
|
||||||
"```"
|
"```\n",
|
||||||
]
|
"\n",
|
||||||
},
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
{
|
":::\n",
|
||||||
"cell_type": "code",
|
"````"
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"%%capture --no-stderr\n",
|
|
||||||
"# %pip install \"pyautogen[teachable]\""
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -85,39 +74,11 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). After application of the filter shown above, only the gpt-4 models are considered.\n",
|
"````{=mdx}\n",
|
||||||
"\n",
|
":::tip\n",
|
||||||
"The config list may look like the following:\n",
|
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||||
"```python\n",
|
":::\n",
|
||||||
"config_list = [\n",
|
"````"
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-1106-preview',\n",
|
|
||||||
" 'api_key': '<your OpenAI API key here>',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4',\n",
|
|
||||||
" 'api_key': '<your OpenAI API key here>',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" 'model': 'gpt-4-32k',\n",
|
|
||||||
" 'api_key': '<your Azure OpenAI API key here>',\n",
|
|
||||||
" 'base_url': '<your Azure OpenAI API base here>',\n",
|
|
||||||
" 'api_type': 'azure',\n",
|
|
||||||
" 'api_version': '2024-02-15-preview',\n",
|
|
||||||
" },\n",
|
|
||||||
"]\n",
|
|
||||||
"```\n",
|
|
||||||
"\n",
|
|
||||||
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
|
|
||||||
"\n",
|
|
||||||
"You can set the value of config_list in other ways if you prefer, e.g., loading from a YAML file."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -833,6 +794,13 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"description": "Learn how to persist memories across chat sessions using the Teachability capability",
|
||||||
|
"tags": [
|
||||||
|
"teachability",
|
||||||
|
"capability"
|
||||||
|
]
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "flaml",
|
"display_name": "flaml",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
|
|
@ -1,13 +1,5 @@
|
||||||
{
|
{
|
||||||
"cells": [
|
"cells": [
|
||||||
{
|
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_teaching.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"attachments": {},
|
"attachments": {},
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
|
@ -22,57 +14,16 @@
|
||||||
"\n",
|
"\n",
|
||||||
"## Requirements\n",
|
"## Requirements\n",
|
||||||
"\n",
|
"\n",
|
||||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
|
"````{=mdx}\n",
|
||||||
|
":::info Requirements\n",
|
||||||
|
"Install `pyautogen`:\n",
|
||||||
"```bash\n",
|
"```bash\n",
|
||||||
"pip install \"pyautogen>=0.2.3\"\n",
|
"pip install pyautogen\n",
|
||||||
"```"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# %pip install --quiet \"pyautogen>=0.2.3\""
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Set your API Endpoint\n",
|
|
||||||
"\n",
|
|
||||||
"The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
|
|
||||||
"\n",
|
|
||||||
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
|
|
||||||
"\n",
|
|
||||||
"The json looks like the following:\n",
|
|
||||||
"```json\n",
|
|
||||||
"[\n",
|
|
||||||
" {\n",
|
|
||||||
" \"model\": \"gpt-4\",\n",
|
|
||||||
" \"api_key\": \"<your OpenAI API key here>\"\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" \"model\": \"gpt-4\",\n",
|
|
||||||
" \"api_key\": \"<your Azure OpenAI API key here>\",\n",
|
|
||||||
" \"base_url\": \"<your Azure OpenAI API base here>\",\n",
|
|
||||||
" \"api_type\": \"azure\",\n",
|
|
||||||
" \"api_version\": \"2024-02-15-preview\"\n",
|
|
||||||
" },\n",
|
|
||||||
" {\n",
|
|
||||||
" \"model\": \"gpt-4-32k\",\n",
|
|
||||||
" \"api_key\": \"<your Azure OpenAI API key here>\",\n",
|
|
||||||
" \"base_url\": \"<your Azure OpenAI API base here>\",\n",
|
|
||||||
" \"api_type\": \"azure\",\n",
|
|
||||||
" \"api_version\": \"2024-02-15-preview\"\n",
|
|
||||||
" }\n",
|
|
||||||
"]\n",
|
|
||||||
"```\n",
|
"```\n",
|
||||||
"\n",
|
"\n",
|
||||||
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n"
|
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||||
|
":::\n",
|
||||||
|
"````\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
@ -99,6 +50,12 @@
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
|
"````{=mdx}\n",
|
||||||
|
":::tip\n",
|
||||||
|
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||||
|
":::\n",
|
||||||
|
"````\n",
|
||||||
|
"\n",
|
||||||
"## Example Task: Literature Survey\n",
|
"## Example Task: Literature Survey\n",
|
||||||
"\n",
|
"\n",
|
||||||
"We consider a scenario where one needs to find research papers of a certain topic, categorize the application domains, and plot a bar chart of the number of papers in each domain."
|
"We consider a scenario where one needs to find research papers of a certain topic, categorize the application domains, and plot a bar chart of the number of papers in each domain."
|
||||||
|
@ -942,6 +899,12 @@
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
|
"front_matter": {
|
||||||
|
"description": "Teach the agent new skills using natural language",
|
||||||
|
"tags": [
|
||||||
|
"teaching"
|
||||||
|
]
|
||||||
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "flaml-eval",
|
"display_name": "flaml-eval",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
|
|