mirror of https://github.com/microsoft/autogen.git
Use config list in notebook (#2537)
This commit is contained in:
parent 1b8d65df0a
commit c94b5c6a61
@@ -30,14 +30,17 @@
 "metadata": {},
 "outputs": [],
 "source": [
 "import os\n",
 "\n",
 "from IPython.display import Image, display\n",
 "\n",
 "import autogen\n",
 "from autogen.coding import LocalCommandLineCodeExecutor\n",
 "\n",
-"config_list = [{\"model\": \"gpt-4\", \"api_key\": os.getenv(\"OPENAI_API_KEY\")}]"
+"config_list = autogen.config_list_from_json(\n",
+"    \"OAI_CONFIG_LIST\",\n",
+"    filter_dict={\"tags\": [\"gpt-4\"]},  # comment out to get all\n",
+")\n",
+"# When using a single openai endpoint, you can use the following:\n",
+"# config_list = [{\"model\": \"gpt-4\", \"api_key\": os.getenv(\"OPENAI_API_KEY\")}]\n"
 ]
 },
 {
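For context, the new cell above reads its configuration from an `OAI_CONFIG_LIST` JSON list. A minimal sketch of a file that would satisfy the `tags: ["gpt-4"]` filter (the API key is a placeholder; only the file name and tag come from the diff):

```python
import json

# Sketch of an OAI_CONFIG_LIST file; "sk-..." is a placeholder key.
# The "tags" entry is what filter_dict={"tags": ["gpt-4"]} above matches.
configs = [
    {"model": "gpt-4", "api_key": "sk-...", "tags": ["gpt-4"]},
]
with open("OAI_CONFIG_LIST", "w") as f:
    json.dump(configs, f, indent=2)
```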
@@ -882,7 +885,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.5"
+"version": "3.10.14"
 },
 "vscode": {
 "interpreter": {
@@ -1,342 +1,342 @@
 {
 "cells": [
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# LLM Configuration\n",
 "\n",
 "In AutoGen, agents use LLMs as key components to understand and react. To configure an agent's access to LLMs, you can specify an `llm_config` argument in its constructor. For example, the following snippet shows a configuration that uses `gpt-4`:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 2,
 "metadata": {},
 "outputs": [],
 "source": [
 "import os\n",
 "\n",
 "llm_config = {\n",
 "    \"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ[\"OPENAI_API_KEY\"]}],\n",
 "}"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "````{=mdx}\n",
 ":::warning\n",
 "It is important to never commit secrets into your code; therefore, we read the OpenAI API key from an environment variable.\n",
 ":::\n",
 "````\n",
 "\n",
 "This `llm_config` can then be passed to an agent's constructor to enable it to use the LLM."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "import autogen\n",
 "\n",
 "assistant = autogen.AssistantAgent(name=\"assistant\", llm_config=llm_config)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "\n",
 "## Introduction to `config_list`\n",
 "\n",
 "Different tasks may require different models, and the `config_list` allows specifying the different endpoints and configurations that are to be used. It is a list of dictionaries, each of which contains the following keys depending on the kind of endpoint being used:\n",
 "\n",
 "````{=mdx}\n",
 "import Tabs from '@theme/Tabs';\n",
 "import TabItem from '@theme/TabItem';\n",
 "\n",
 "<Tabs>\n",
 " <TabItem value=\"openai\" label=\"OpenAI\" default>\n",
 " - `model` (str, required): The identifier of the model to be used, such as 'gpt-4', 'gpt-3.5-turbo'.\n",
 " - `api_key` (str, optional): The API key required for authenticating requests to the model's API endpoint.\n",
 " - `base_url` (str, optional): The base URL of the API endpoint. This is the root address where API calls are directed.\n",
 " - `tags` (List[str], optional): Tags which can be used for filtering.\n",
 "\n",
 " Example:\n",
 " ```json\n",
 " [\n",
 "     {\n",
 "         \"model\": \"gpt-4\",\n",
 "         \"api_key\": os.environ['OPENAI_API_KEY']\n",
 "     }\n",
 " ]\n",
 " ```\n",
 " </TabItem>\n",
 " <TabItem value=\"azureopenai\" label=\"Azure OpenAI\">\n",
 " - `model` (str, required): The deployment to be used. The model corresponds to the deployment name on Azure OpenAI.\n",
 " - `api_key` (str, optional): The API key required for authenticating requests to the model's API endpoint.\n",
 " - `api_type`: `azure`\n",
 " - `base_url` (str, optional): The base URL of the API endpoint. This is the root address where API calls are directed.\n",
 " - `api_version` (str, optional): The version of the Azure API you wish to use.\n",
 " - `tags` (List[str], optional): Tags which can be used for filtering.\n",
 "\n",
 " Example:\n",
 " ```json\n",
 " [\n",
 "     {\n",
 "         \"model\": \"my-gpt-4-deployment\",\n",
 "         \"api_type\": \"azure\",\n",
 "         \"api_key\": os.environ['AZURE_OPENAI_API_KEY'],\n",
 "         \"base_url\": \"https://ENDPOINT.openai.azure.com/\",\n",
 "         \"api_version\": \"2024-02-15-preview\"\n",
 "     }\n",
 " ]\n",
 " ```\n",
 " </TabItem>\n",
 " <TabItem value=\"other\" label=\"Other OpenAI compatible\">\n",
 " - `model` (str, required): The identifier of the model to be used, such as 'llama-7B'.\n",
 " - `api_key` (str, optional): The API key required for authenticating requests to the model's API endpoint.\n",
 " - `base_url` (str, optional): The base URL of the API endpoint. This is the root address where API calls are directed.\n",
 " - `tags` (List[str], optional): Tags which can be used for filtering.\n",
 "\n",
 " Example:\n",
 " ```json\n",
 " [\n",
 "     {\n",
 "         \"model\": \"llama-7B\",\n",
 "         \"base_url\": \"http://localhost:1234\"\n",
 "     }\n",
 " ]\n",
 " ```\n",
 " </TabItem>\n",
 "</Tabs>\n",
 "````\n",
 "\n",
 "---\n",
 "\n",
 "````{=mdx}\n",
 ":::tip\n",
 "By default this will create a model client which assumes an OpenAI API (or compatible) endpoint. To use custom model clients, see [here](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_custom_model.ipynb).\n",
 ":::\n",
 "````\n",
 "\n",
 "### `OAI_CONFIG_LIST` pattern\n",
 "\n",
 "A common, useful pattern is to define this `config_list` via JSON (specified as a file or an environment variable set to a JSON-formatted string) and then use the [`config_list_from_json`](/docs/reference/oai/openai_utils#config_list_from_json) helper function to load it:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "config_list = autogen.config_list_from_json(\n",
 "    env_or_file=\"OAI_CONFIG_LIST\",\n",
 ")\n",
 "\n",
 "# Then, create the assistant agent with the config list\n",
 "assistant = autogen.AssistantAgent(name=\"assistant\", llm_config={\"config_list\": config_list})"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "This can be helpful as it keeps all the configuration in one place across different projects or notebooks.\n",
 "\n",
 "This function interprets the `env_or_file` argument as follows (see the sketch after this cell):\n",
 "\n",
 "- If `env_or_file` is an environment variable then:\n",
 "    - It will first try to load the file from the path specified in the environment variable.\n",
 "    - If there is no file, it will try to interpret the environment variable as a JSON string.\n",
 "- Otherwise, it will try to open the file at the path specified by `env_or_file`.\n",
 "\n",
 "### Why is it a list?\n",
 "\n",
 "Being a list allows you to define multiple models that can be used by the agent. This is useful for a few reasons:\n",
 "\n",
 "- If one model times out or fails, the agent can try another model.\n",
 "- A single global list of models can be [filtered](#config-list-filtering) by certain keys (e.g. name, tag) so that only selected models are passed to a given agent (e.g. use a cheaper GPT-3.5 model for agents solving easier tasks).\n",
 "- While the core agents (e.g. conversable or assistant) do not have special logic around selecting configs, some of the specialized agents *may* have logic to select the best model based on the task at hand.\n",
 "\n",
 "### How does an agent decide which model to pick out of the list?\n",
 "\n",
 "An agent uses the very first model available in the `config_list` and makes LLM calls against this model. If the model fails (e.g. API throttling), the agent retries the request against the second model, and so on, until a completion is received (or raises an error if none of the models successfully completes the request). In general there is no implicit/hidden logic inside agents that picks \"the best model for the task\"; however, some specialized agents may attempt to do so. It is the developer's responsibility to pick the right models and use them with agents.\n",
 "\n",
 "### Config list filtering\n",
 "\n",
-"As described above the list can be filtered based on certain criteria. This is defined as a dictionary of key to filter on and value to filter by. For example, if you have a list of configs and you want to select the one with the model \"gpt-3.5-turbo\" you can use the following filter:"
+"As described above the list can be filtered based on certain criteria. This is defined as a dictionary of key to filter on and values to filter by. For example, if you have a list of configs and you want to select the one with the model \"gpt-3.5-turbo\" you can use the following filter:"
 ]
 },
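The resolution order described in the cell above can be sketched in plain Python; this mirrors only the documented behavior of the `env_or_file` argument, not the library's actual implementation:

```python
import json
import os


def resolve_config_list(env_or_file: str) -> list:
    """Sketch of the documented lookup order; not the library's code."""
    value = os.environ.get(env_or_file)
    if value is not None:
        # 1) Treat the environment variable's value as a file path first.
        if os.path.isfile(value):
            with open(value) as f:
                return json.load(f)
        # 2) Otherwise interpret the variable itself as a JSON string.
        return json.loads(value)
    # 3) Not an environment variable: treat the argument as a file path.
    with open(env_or_file) as f:
        return json.load(f)
```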
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
-"filter_dict = {\"model\": \"gpt-3.5-turbo\"}"
+"filter_dict = {\"model\": [\"gpt-3.5-turbo\"]}"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "\n",
 "This can then be applied to a config list loaded in Python with [`filter_config`](/docs/reference/oai/openai_utils#filter_config):"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "config_list = autogen.filter_config(config_list, filter_dict)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Or, directly when loading the config list using [`config_list_from_json`](/docs/reference/oai/openai_utils#config_list_from_json):"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "config_list = autogen.config_list_from_json(env_or_file=\"OAI_CONFIG_LIST\", filter_dict=filter_dict)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Tags\n",
 "\n",
 "Model names can differ between OpenAI and Azure OpenAI, so tags offer an easy way to smooth over this inconsistency. Tags are a list of strings in the `config_list`; for example, in the following `config_list`:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 1,
 "metadata": {},
 "outputs": [],
 "source": [
 "config_list = [\n",
 "    {\"model\": \"my-gpt-4-deployment\", \"api_key\": \"\", \"tags\": [\"gpt4\", \"openai\"]},\n",
 "    {\"model\": \"llama-7B\", \"base_url\": \"http://127.0.0.1:8080\", \"tags\": [\"llama\", \"local\"]},\n",
 "]"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Then when filtering the `config_list` you can specify the desired tags. A config is selected if it has at least one of the tags specified in the filter (illustrated in the sketch after the next cell). For example, to just get the `llama` model, you can use the following filter:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 3,
 "metadata": {},
 "outputs": [],
 "source": [
 "filter_dict = {\"tags\": [\"llama\", \"another_tag\"]}\n",
 "config_list = autogen.filter_config(config_list, filter_dict)\n",
 "assert len(config_list) == 1"
 ]
 },
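The tag matching used by the filter above selects a config whenever its tags overlap the filter's tags; a self-contained sketch of that rule (illustrative only, not `filter_config` itself):

```python
def matches_tags(config: dict, wanted: list) -> bool:
    # A config is selected if it carries at least one of the wanted tags.
    return bool(set(config.get("tags", [])) & set(wanted))


config_list = [
    {"model": "my-gpt-4-deployment", "api_key": "", "tags": ["gpt4", "openai"]},
    {"model": "llama-7B", "base_url": "http://127.0.0.1:8080", "tags": ["llama", "local"]},
]
selected = [c for c in config_list if matches_tags(c, ["llama", "another_tag"])]
assert len(selected) == 1  # only the llama entry carries a matching tag
```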
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Other configuration parameters\n",
 "\n",
 "Besides the `config_list`, there are other parameters that can be used to configure the LLM. These are split between parameters specifically used by AutoGen and those passed into the model client.\n",
 "\n",
 "### AutoGen specific parameters\n",
 "\n",
 "- `cache_seed` - This is a legacy parameter; it is not recommended unless you need it to disable the default caching behavior. To disable default caching, set this to `None`. Otherwise, by default or if an int is passed, the [DiskCache](/docs/reference/cache/disk_cache) will be used. For the new way of using caching, pass a [Cache](/docs/reference/cache/) object into [`initiate_chat`](/docs/reference/agentchat/conversable_agent#initiate_chat).\n",
 "\n",
 "### Extra model client parameters\n",
 "\n",
 "It is also possible to pass parameters through to the OpenAI client. Parameters that correspond to the [`OpenAI` client](https://github.com/openai/openai-python/blob/d231d1fa783967c1d3a1db3ba1b52647fff148ac/src/openai/_client.py#L67) or the [`OpenAI` completions create API](https://github.com/openai/openai-python/blob/d231d1fa783967c1d3a1db3ba1b52647fff148ac/src/openai/resources/completions.py#L35) can be supplied.\n",
 "\n",
 "This is commonly used for things like `temperature` or `timeout`.\n",
 "\n",
 "## Example\n",
 "\n"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "llm_config = {\n",
 "    \"config_list\": [\n",
 "        {\n",
 "            \"model\": \"my-gpt-4-deployment\",\n",
 "            \"api_key\": os.environ.get(\"AZURE_OPENAI_API_KEY\"),\n",
 "            \"api_type\": \"azure\",\n",
 "            \"base_url\": os.environ.get(\"AZURE_OPENAI_API_BASE\"),\n",
 "            \"api_version\": \"2024-02-15-preview\",\n",
 "        },\n",
 "        {\n",
 "            \"model\": \"llama-7B\",\n",
 "            \"base_url\": \"http://127.0.0.1:8080\",\n",
 "            \"api_type\": \"openai\",\n",
 "        },\n",
 "    ],\n",
 "    \"temperature\": 0.9,\n",
 "    \"timeout\": 300,\n",
 "}"
 ]
 },
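As a usage sketch, the `llm_config` above can be handed straight to an agent, and, per the `cache_seed` note earlier, caching can be scoped with a `Cache` object instead; this assumes the `Cache.disk` helper and the `cache` argument to `initiate_chat` available in recent pyautogen releases:

```python
import autogen
from autogen.cache import Cache

# "temperature" and "timeout" in llm_config are passed through to the
# model client rather than interpreted by AutoGen itself.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user = autogen.UserProxyAgent(name="user", human_input_mode="NEVER", code_execution_config=False)

# Scope caching to this chat instead of relying on the legacy cache_seed.
with Cache.disk() as cache:
    user.initiate_chat(assistant, message="What is 2 + 2?", max_turns=1, cache=cache)
```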
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Other helpers for loading a config list\n",
 "\n",
 "- [`get_config_list`](/docs/reference/oai/openai_utils#get_config_list): Generates configurations for API calls, primarily from provided API keys.\n",
 "- [`config_list_openai_aoai`](/docs/reference/oai/openai_utils#config_list_openai_aoai): Constructs a list of configurations using both Azure OpenAI and OpenAI endpoints, sourcing API keys from environment variables or local files.\n",
 "- [`config_list_from_models`](/docs/reference/oai/openai_utils#config_list_from_models): Creates configurations based on a provided list of models, useful when targeting specific models without manually specifying each configuration.\n",
 "- [`config_list_from_dotenv`](/docs/reference/oai/openai_utils#config_list_from_dotenv): Constructs a configuration list from a `.env` file, offering a consolidated way to manage multiple API configurations and keys from a single file.\n",
 "\n",
 "See [this notebook](https://github.com/microsoft/autogen/blob/main/notebook/config_loader_utility_functions.ipynb) for examples of using the above functions."
 ]
 }
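For instance, `get_config_list` builds one config entry per provided API key; a brief sketch (the keys are placeholders, and the call shape is an assumption based on the reference entry above):

```python
import autogen

api_keys = ["sk-placeholder-1", "sk-placeholder-2"]  # placeholder keys
config_list = autogen.get_config_list(api_keys)
# Expected shape: one dict per key, e.g. [{"api_key": "sk-placeholder-1"}, ...]
print(config_list)
```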
 ],
 "metadata": {
 "kernelspec": {
 "display_name": "masterclass",
 "language": "python",
 "name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
 "name": "ipython",
 "version": 3
 },
 "file_extension": ".py",
 "mimetype": "text/x-python",
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.11.7"
 },
 "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
 }