Uniform Interface for calling different LLMs (#2916)
* update

* update

* Minor tweaks on text

---------

Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com>
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# A uniform interface to call different LLMs\n",
    "\n",
    "AutoGen provides a uniform interface for making API calls to different LLMs and for creating LLM agents from them.\n",
    "By setting up a configuration file, you can easily switch between different LLMs just by changing the model name, while enjoying all the [enhanced features](https://microsoft.github.io/autogen/docs/topics/llm-caching) such as [caching](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary) and [cost calculation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary)!\n",
    "\n",
    "In this notebook, we show you how to use AutoGen to call different LLMs and create LLM agents from them.\n",
    "\n",
    "Currently, we support the following model families:\n",
    "- [OpenAI](https://platform.openai.com/docs/overview)\n",
    "- [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service/?ef_id=_k_CjwKCAjwps-zBhAiEiwALwsVYdbpVkqA3IbY7WnxtrjNSefBnTfrijwRAFaYd8uuLCjeWsPdfZmxUBoC_ZAQAvD_BwE_k_&OCID=AIDcmm5edswduu_SEM__k_CjwKCAjwps-zBhAiEiwALwsVYdbpVkqA3IbY7WnxtrjNSefBnTfrijwRAFaYd8uuLCjeWsPdfZmxUBoC_ZAQAvD_BwE_k_&gad_source=1&gclid=CjwKCAjwps-zBhAiEiwALwsVYdbpVkqA3IbY7WnxtrjNSefBnTfrijwRAFaYd8uuLCjeWsPdfZmxUBoC_ZAQAvD_BwE)\n",
    "- [Anthropic Claude](https://docs.anthropic.com/en/docs/welcome)\n",
    "- [Google Gemini](https://ai.google.dev/gemini-api/docs)\n",
    "- [Mistral](https://docs.mistral.ai/) (API for open- and closed-source models)\n",
    "- [DeepInfra](https://deepinfra.com/) (API for open-source models)\n",
    "- [TogetherAI](https://www.together.ai/) (API for open-source models)\n",
    "\n",
    "... and more to come!\n",
    "\n",
    "You can also [plug in your locally deployed LLM](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models) into AutoGen if needed."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Install required packages\n",
    "\n",
    "You may want to install AutoGen with options for the specific LLMs you plan to use. Here we install AutoGen with the extras for all of the supported LLMs.\n",
    "By default, AutoGen is installed with OpenAI support only.\n",
    "\n",
    "```bash\n",
    "pip install pyautogen[gemini,anthropic,mistral,together]\n",
    "```\n",
    "\n",
    "\n",
    "## Config list setup\n",
    "\n",
    "First, create an `OAI_CONFIG_LIST` file to specify the API keys for the LLMs you want to use.\n",
    "Generally, you just need to specify the `model`, `api_key`, and `api_type` expected by the provider.\n",
    "\n",
    "```python\n",
    "[\n",
    "    {\n",
    "        # using OpenAI\n",
    "        \"model\": \"gpt-35-turbo-1106\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "        # default api_type is openai\n",
    "    },\n",
    "    {\n",
    "        # using Azure OpenAI\n",
    "        \"model\": \"gpt-4-turbo-1106\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "        \"api_type\": \"azure\",\n",
    "        \"base_url\": \"YOUR_BASE_URL\",\n",
    "        \"api_version\": \"YOUR_API_VERSION\",\n",
    "    },\n",
    "    {\n",
    "        # using Google Gemini\n",
    "        \"model\": \"gemini-1.5-pro-latest\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "        \"api_type\": \"google\",\n",
    "    },\n",
    "    {\n",
    "        # using DeepInfra\n",
    "        \"model\": \"meta-llama/Meta-Llama-3-70B-Instruct\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "        \"base_url\": \"https://api.deepinfra.com/v1/openai\",  # need to specify the base_url\n",
    "    },\n",
    "    {\n",
    "        # using Anthropic Claude\n",
    "        \"model\": \"claude-3-opus-20240229\",  # matches the model used by the agents below\n",
    "        \"api_type\": \"anthropic\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "    },\n",
    "    {\n",
    "        # using Mistral\n",
    "        \"model\": \"mistral-large-latest\",\n",
    "        \"api_type\": \"mistral\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "    },\n",
    "    {\n",
    "        # using TogetherAI\n",
    "        \"model\": \"google/gemma-7b-it\",\n",
    "        \"api_key\": \"YOUR_API_KEY\",\n",
    "        \"api_type\": \"together\",\n",
    "    },\n",
    "    ...\n",
    "]\n",
    "```\n"
   ]
  },
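  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, here is a minimal sketch of loading the file above and filtering it down to a single provider's entry. It is not executed in this notebook and assumes `OAI_CONFIG_LIST` sits in the working directory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import autogen\n",
    "\n",
    "# Load the config file and keep only the entries whose model name matches.\n",
    "# An empty result usually means a typo in the model name or a missing entry.\n",
    "config_list = autogen.config_list_from_json(\n",
    "    \"OAI_CONFIG_LIST\",\n",
    "    filter_dict={\"model\": [\"gpt-35-turbo-1106\"]},\n",
    ")\n",
    "assert len(config_list) > 0, \"No matching entry found in OAI_CONFIG_LIST\""
   ]
  },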
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Uniform interface to call different LLMs\n",
    "We first demonstrate how to use AutoGen to call different LLMs with the same wrapper class.\n",
    "\n",
    "After you install the relevant packages and set up your config list, you only need three steps to call different LLMs:\n",
    "1. Extract the config with the model name you want to use.\n",
    "2. Create a client with the model name.\n",
    "3. Call the client's `create` method to get the response.\n",
    "\n",
    "Below, we define a helper function `model_call_example_function` that implements these steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "import autogen\n",
    "from autogen import OpenAIWrapper\n",
    "\n",
    "\n",
    "def model_call_example_function(model: str, message: str, cache_seed: int = 41, print_cost: bool = False):\n",
    "    \"\"\"\n",
    "    A helper function that demonstrates how to call different models using the OpenAIWrapper class.\n",
    "    Note that the name `OpenAIWrapper` is no longer accurate: it now wraps multiple model providers,\n",
    "    not just OpenAI. The name may change in the future.\n",
    "    \"\"\"\n",
    "    config_list = autogen.config_list_from_json(\n",
    "        \"OAI_CONFIG_LIST\",\n",
    "        filter_dict={\n",
    "            \"model\": [model],\n",
    "        },\n",
    "    )\n",
    "    client = OpenAIWrapper(config_list=config_list)\n",
    "    response = client.create(messages=[{\"role\": \"user\", \"content\": message}], cache_seed=cache_seed)\n",
    "\n",
    "    print(f\"Response from model {model}: {response.choices[0].message.content}\")\n",
    "\n",
    "    # Print the cost of the API call\n",
    "    if print_cost:\n",
    "        client.print_usage_summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Response from model gpt-35-turbo-1106: Why couldn't the bicycle stand up by itself?\n",
      "\n",
      "Because it was two-tired!\n"
     ]
    }
   ],
   "source": [
    "model_call_example_function(model=\"gpt-35-turbo-1106\", message=\"Tell me a joke.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Response from model gemini-1.5-pro-latest: Why don't scientists trust atoms? \n",
      "\n",
      "Because they make up everything! \n",
      " \n",
      "Let me know if you'd like to hear another one! \n",
      "\n"
     ]
    }
   ],
   "source": [
    "model_call_example_function(model=\"gemini-1.5-pro-latest\", message=\"Tell me a joke.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Response from model meta-llama/Meta-Llama-3-70B-Instruct: Here's one:\n",
      "\n",
      "Why couldn't the bicycle stand up by itself?\n",
      "\n",
      "(wait for it...)\n",
      "\n",
      "Because it was two-tired!\n",
      "\n",
      "How was that? Do you want to hear another one?\n"
     ]
    }
   ],
   "source": [
    "model_call_example_function(model=\"meta-llama/Meta-Llama-3-70B-Instruct\", message=\"Tell me a joke. \")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Response from model mistral-large-latest: Sure, here's a light-hearted joke for you:\n",
      "\n",
      "Why don't scientists trust atoms?\n",
      "\n",
      "Because they make up everything!\n",
      "----------------------------------------------------------------------------------------------------\n",
      "Usage summary excluding cached usage: \n",
      "Total cost: 0.00042\n",
      "* Model 'mistral-large-latest': cost: 0.00042, prompt_tokens: 9, completion_tokens: 32, total_tokens: 41\n",
      "\n",
      "All completions are non-cached: the total cost with cached completions is the same as actual cost.\n",
      "----------------------------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "model_call_example_function(model=\"mistral-large-latest\", message=\"Tell me a joke. \", print_cost=True)"
   ]
  },
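  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The remaining providers in the config list work through the same helper. The sketch below is not executed in this notebook; it assumes valid Anthropic and TogetherAI entries (with real API keys) in your `OAI_CONFIG_LIST`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assumes the matching entries and API keys exist in OAI_CONFIG_LIST.\n",
    "model_call_example_function(model=\"claude-3-opus-20240229\", message=\"Tell me a joke.\")\n",
    "model_call_example_function(model=\"google/gemma-7b-it\", message=\"Tell me a joke.\")"
   ]
  },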
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using different LLMs in agents\n",
    "Below we give a quick demo of using agents backed by different LLMs in a group chat.\n",
    "\n",
    "We mock a debate scenario in which each LLM agent is a debater on either the affirmative or the negative side. We use a round-robin strategy so that debaters from the two teams speak in turn."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_llm_config(model_name):\n",
    "    return {\n",
    "        \"config_list\": autogen.config_list_from_json(\"OAI_CONFIG_LIST\", filter_dict={\"model\": [model_name]}),\n",
    "        \"cache_seed\": 41,\n",
    "    }\n",
    "\n",
    "\n",
    "affirmative_system_message = \"You are in the Affirmative team of a debate. When it is your turn, please give at least one reason why you are for the topic. Keep it short.\"\n",
    "negative_system_message = \"You are in the Negative team of a debate. The affirmative team has given their reason, please counter their argument. Keep it short.\"\n",
    "\n",
    "gpt35_agent = autogen.AssistantAgent(\n",
    "    name=\"GPT35\", system_message=affirmative_system_message, llm_config=get_llm_config(\"gpt-35-turbo-1106\")\n",
    ")\n",
    "\n",
    "llama_agent = autogen.AssistantAgent(\n",
    "    name=\"Llama3\",\n",
    "    system_message=negative_system_message,\n",
    "    llm_config=get_llm_config(\"meta-llama/Meta-Llama-3-70B-Instruct\"),\n",
    ")\n",
    "\n",
    "mistral_agent = autogen.AssistantAgent(\n",
    "    name=\"Mistral\", system_message=affirmative_system_message, llm_config=get_llm_config(\"mistral-large-latest\")\n",
    ")\n",
    "\n",
    "gemini_agent = autogen.AssistantAgent(\n",
    "    name=\"Gemini\", system_message=negative_system_message, llm_config=get_llm_config(\"gemini-1.5-pro-latest\")\n",
    ")\n",
    "\n",
    "claude_agent = autogen.AssistantAgent(\n",
    "    name=\"Claude\", system_message=affirmative_system_message, llm_config=get_llm_config(\"claude-3-opus-20240229\")\n",
    ")\n",
    "\n",
    "user_proxy = autogen.UserProxyAgent(\n",
    "    name=\"User\",\n",
    "    code_execution_config=False,\n",
    ")\n",
    "\n",
    "# initialize the group chat with the round-robin speaker selection method\n",
    "groupchat = autogen.GroupChat(\n",
    "    agents=[claude_agent, gemini_agent, mistral_agent, llama_agent, gpt35_agent, user_proxy],\n",
    "    messages=[],\n",
    "    max_round=8,\n",
    "    speaker_selection_method=\"round_robin\",\n",
    ")\n",
    "manager = autogen.GroupChatManager(groupchat=groupchat)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33mUser\u001b[0m (to chat_manager):\n",
      "\n",
      "Debate Topic: Should vaccination be mandatory?\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[32m\n",
      "Next speaker: Claude\n",
      "\u001b[0m\n",
      "\u001b[33mClaude\u001b[0m (to chat_manager):\n",
      "\n",
      "As a member of the Affirmative team, I believe that vaccination should be mandatory for several reasons:\n",
      "\n",
      "1. Herd immunity: When a large percentage of the population is vaccinated, it helps protect those who cannot receive vaccines due to medical reasons or weakened immune systems. Mandatory vaccination ensures that we maintain a high level of herd immunity, preventing the spread of dangerous diseases.\n",
      "\n",
      "2. Public health: Vaccines have been proven to be safe and effective in preventing the spread of infectious diseases. By making vaccination mandatory, we prioritize public health and reduce the risk of outbreaks that could lead to widespread illness and loss of life.\n",
      "\n",
      "3. Societal benefits: Mandatory vaccination not only protects individuals but also benefits society as a whole. It reduces healthcare costs associated with treating preventable diseases and minimizes the economic impact of disease outbreaks on businesses and communities.\n",
      "\n",
      "In summary, mandatory vaccination is a critical tool in protecting public health, maintaining herd immunity, and promoting the well-being of our society.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[32m\n",
      "Next speaker: Gemini\n",
      "\u001b[0m\n",
      "\u001b[33mGemini\u001b[0m (to chat_manager):\n",
      "\n",
      "While we acknowledge the importance of herd immunity and public health, mandating vaccinations infringes upon individual autonomy and medical freedom. Blanket mandates fail to consider individual health circumstances and potential vaccine risks, which are often overlooked in favor of a one-size-fits-all approach. \n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[32m\n",
      "Next speaker: Mistral\n",
      "\u001b[0m\n",
      "\u001b[33mMistral\u001b[0m (to chat_manager):\n",
      "\n",
      "I understand your concerns and the value of individual autonomy. However, it's important to note that mandatory vaccination policies often include exemptions for medical reasons. This allows for individual health circumstances to be taken into account, ensuring that those who cannot safely receive vaccines are not put at risk. The goal is to strike a balance between protecting public health and respecting individual choices, while always prioritizing the well-being and safety of all members of society.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[32m\n",
      "Next speaker: Llama3\n",
      "\u001b[0m\n",
      "\u001b[33mLlama3\u001b[0m (to chat_manager):\n",
      "\n",
      "I understand your point, but blanket exemptions for medical reasons are not sufficient to address the complexities of individual health circumstances. What about those who have experienced adverse reactions to vaccines in the past or have a family history of such reactions? What about those who have compromised immune systems or are taking medications that may interact with vaccine components? A one-size-fits-all approach to vaccination ignores the nuances of individual health and puts some people at risk of harm. Additionally, mandating vaccination undermines trust in government and healthcare institutions, leading to further divides and mistrust. We need to prioritize informed consent and individual autonomy in medical decisions, rather than relying solely on a blanket mandate.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[32m\n",
      "Next speaker: GPT35\n",
      "\u001b[0m\n",
      "\u001b[33mGPT35\u001b[0m (to chat_manager):\n",
      "\n",
      "I understand your point, but mandatory vaccination policies can still allow for exemptions based on medical history, allergic reactions, and compromised immunity. This would address the individual circumstances you mentioned. Furthermore, mandating vaccination can also help strengthen trust in public health measures by demonstrating a commitment to protecting the entire community. Informed consent is important, but it is also essential to consider the potential consequences of not being vaccinated on public health and the well-being of others.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[32m\n",
      "Next speaker: User\n",
      "\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "chat_history = user_proxy.initiate_chat(recipient=manager, message=\"Debate Topic: Should vaccination be mandatory?\")"
   ]
  },
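  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`initiate_chat` returns a `ChatResult` object. As a closing sketch (not executed here), you can inspect the recorded messages and the aggregated cost; the field names assume the `ChatResult` dataclass of recent pyautogen releases:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Number of messages recorded in the group chat.\n",
    "print(len(chat_history.chat_history))\n",
    "\n",
    "# Aggregated cost info for the whole conversation.\n",
    "print(chat_history.cost)"
   ]
  }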
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "autodev",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}