autogen/notebook/agentchat_lmm_llava.ipynb


{
"cells": [
{
"cell_type": "markdown",
"id": "2c75da30",
"metadata": {},
"source": [
"# Agent Chat with Multimodal Models: LLaVA\n",
"\n",
"This notebook uses **LLaVA** as an example of a multimodal model. More information about LLaVA can be found on its [GitHub page](https://github.com/haotian-liu/LLaVA).\n",
"\n",
"\n",
"This notebook contains the following information and examples:\n",
"\n",
"1. Setup LLaVA Model\n",
" - Option 1: Use [API calls from `Replicate`](#replicate)\n",
" - Option 2: Setup [LLaVA locally (requires GPU)](#local)\n",
"2. Application 1: [Image Chat](#app-1)\n",
"3. Application 2: [Figure Creator](#app-2)"
]
},
{
"cell_type": "markdown",
"id": "5f51914c",
"metadata": {},
"source": [
"### Before everything starts, install AutoGen with the `lmm` option\n",
"```bash\n",
"pip install \"autogen-agentchat[lmm]~=0.2\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b1ffe2ab",
"metadata": {},
"outputs": [],
"source": [
"# We use this variable to control whether LLaVA is hosted locally or remotely.\n",
"# More details in the two setup options below.\n",
"import json\n",
"import os\n",
"import random\n",
"import time\n",
"from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union\n",
"\n",
"import matplotlib.pyplot as plt\n",
"import requests\n",
"from PIL import Image\n",
"from termcolor import colored\n",
"\n",
"import autogen\n",
"from autogen import Agent, AssistantAgent, ConversableAgent, UserProxyAgent\n",
"from autogen.agentchat.contrib.llava_agent import LLaVAAgent, llava_call\n",
"\n",
"LLAVA_MODE = \"remote\" # Either \"local\" or \"remote\"\n",
"assert LLAVA_MODE in [\"local\", \"remote\"]"
]
},
{
"cell_type": "markdown",
"id": "acc4703b",
"metadata": {},
"source": [
"<a id=\"replicate\"></a>\n",
"## (Option 1, preferred) Use API Calls from Replicate [Remote]\n",
"We can call LLaVA through [Replicate](https://replicate.com/yorickvp/llava-13b/api), which hosts the model for you.\n",
"\n",
"1. Run `pip install replicate` to install the package.\n",
"2. Get an API token from your Replicate [account settings page](https://replicate.com/account/api-tokens).\n",
"3. Copy the token and authenticate by setting it as an environment variable:\n",
"   `export REPLICATE_API_TOKEN=<paste-your-token-here>` \n",
"4. Note that Replicate requires your credit card information. 🥲\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f650bf3d",
"metadata": {},
"outputs": [],
"source": [
"# pip install replicate\n",
"# import os\n",
"## Alternatively, uncomment the line below to set the API token inside the notebook.\n",
"# os.environ[\"REPLICATE_API_TOKEN\"] = \"r8_xyz your api key goes here~\""
]
},
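The remote option only works if the token is visible to the process. A tiny guard (a hypothetical helper, not part of AutoGen) can fail fast with a clear message before any remote call:

```python
import os


def replicate_token_ready() -> bool:
    """Return True if REPLICATE_API_TOKEN is set to a non-empty value."""
    return bool(os.environ.get("REPLICATE_API_TOKEN", "").strip())


if not replicate_token_ready():
    print("Set REPLICATE_API_TOKEN before using LLAVA_MODE = 'remote'.")
```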
{
"cell_type": "code",
"execution_count": 4,
"id": "267ffd78",
"metadata": {},
"outputs": [],
"source": [
"if LLAVA_MODE == \"remote\":\n",
" import replicate\n",
"\n",
" llava_config_list = [\n",
" {\n",
" \"model\": \"whatever, will be ignored for remote\", # The model name is ignored for the remote option.\n",
" \"api_key\": \"None\", # The actual credential is read from os.environ[\"REPLICATE_API_TOKEN\"].\n",
" \"base_url\": \"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591\",\n",
" }\n",
" ]"
]
},
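Before handing the list to an agent, you can sanity-check that each entry carries the keys this notebook uses (`model`, `api_key`, `base_url`). This is a minimal sketch; the key names are taken from the config above, not from an official AutoGen validator:

```python
REQUIRED_KEYS = {"model", "api_key", "base_url"}


def validate_config_list(config_list):
    """Raise ValueError if any config entry is missing a key used in this notebook."""
    for i, cfg in enumerate(config_list):
        missing = REQUIRED_KEYS - cfg.keys()
        if missing:
            raise ValueError(f"config entry {i} is missing keys: {sorted(missing)}")
    return True
```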
{
"cell_type": "markdown",
"id": "1805e4bd",
"metadata": {},
"source": [
"<a id=\"local\"></a>\n",
"## [Option 2] Setup LLaVA Locally\n",
"\n",
"\n",
"## Install the LLaVA library\n",
"\n",
"Please follow the LLaVA GitHub [page](https://github.com/haotian-liu/LLaVA/) to install LLaVA.\n",
"\n",
"\n",
"#### Download the package\n",
"```bash\n",
"git clone https://github.com/haotian-liu/LLaVA.git\n",
"cd LLaVA\n",
"```\n",
"\n",
"#### Install the inference package\n",
"```bash\n",
"conda create -n llava python=3.10 -y\n",
"conda activate llava\n",
"pip install --upgrade pip # enable PEP 660 support\n",
"pip install -e .\n",
"```\n",
"\n",
"\n",
"\n",
"Some helpful packages and dependencies:\n",
"```bash\n",
"conda install -c nvidia cuda-toolkit\n",
"```\n",
"\n",
"\n",
"### Launch\n",
"\n",
"In one terminal, start the controller first:\n",
"```bash\n",
"python -m llava.serve.controller --host 0.0.0.0 --port 10000\n",
"```\n",
"\n",
"\n",
"Then, in another terminal, start the worker, which will load the model to the GPU:\n",
"```bash\n",
"python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b\n",
"``"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "93bf7915",
"metadata": {},
"outputs": [],
"source": [
"# Run this code block only if you want to run LLaVA locally\n",
"if LLAVA_MODE == \"local\":\n",
" llava_config_list = [\n",
" {\n",
" \"model\": \"llava-v1.5-13b\",\n",
" \"api_key\": \"None\",\n",
" \"base_url\": \"http://0.0.0.0:10000\",\n",
" }\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "307852dd",
"metadata": {},
"source": [
"# Multimodal Functions\n",
"\n",
"We can test the `llava_call` function with the following AutoGen image.\n",
"![](https://raw.githubusercontent.com/microsoft/autogen/main/website/static/img/autogen_agentchat.png)\n"
]
},
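Prompts embed images with `<img URL>` tags, which are replaced by an `<image>` placeholder in the text sent to the model (visible in the conversation logs below). A minimal sketch of that kind of tag extraction (a hypothetical helper, not the actual AutoGen implementation):

```python
import re


def split_prompt(prompt: str):
    """Return (text_with_placeholders, image_urls) for a prompt containing <img URL> tags."""
    urls = re.findall(r"<img\s+([^>]+)>", prompt)
    text = re.sub(r"<img\s+[^>]+>", "<image>", prompt)
    return text, urls


text, urls = split_prompt(
    "Describe this framework <img https://example.com/autogen.png> with bullet points."
)
# text -> "Describe this framework <image> with bullet points."
# urls -> ["https://example.com/autogen.png"]
```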
{
"cell_type": "code",
"execution_count": 7,
"id": "7c1be77f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The AutoGen framework is a tool for creating and managing conversational agents. It allows for the creation of multiple-agent conversations, enabling complex interactions between different agents. The framework is designed to be flexible and scalable, allowing for the addition of new agents and conversations as needed.\n",
"\n",
"The framework consists of three main components:\n",
"\n",
"1. Agents: These are the individual conversational entities that can be created and managed within the framework. Each agent has its own unique set of conversational capabilities and can engage in conversations with other agents.\n",
"\n",
"2. Conversations: These are the interactions between agents, which can be managed and directed by the framework. Conversations can be structured and organized to facilitate efficient communication between agents.\n",
"\n",
"3. Flexibility: The framework is designed to be flexible, allowing for the addition of new agents and conversations as needed. This flexibility enables the framework to adapt to changing requirements and facilitate the development of more complex conversational systems.\n"
]
}
],
"source": [
"rst = llava_call(\n",
" \"Describe this AutoGen framework <img https://raw.githubusercontent.com/microsoft/autogen/main/website/static/img/autogen_agentchat.png> with bullet points.\",\n",
" llm_config={\"config_list\": llava_config_list, \"temperature\": 0},\n",
")\n",
"\n",
"print(rst)"
]
},
{
"cell_type": "markdown",
"id": "7e4faf59",
"metadata": {},
"source": [
"<a id=\"app-1\"></a>\n",
"## Application 1: Image Chat\n",
"\n",
"In this section, we present a straightforward dual-agent architecture that enables a user to chat with a multimodal agent.\n",
"\n",
"\n",
"First, we show this image and ask a question.\n",
"![](https://th.bing.com/th/id/R.422068ce8af4e15b0634fe2540adea7a?rik=y4OcXBE%2fqutDOw&pid=ImgRaw&r=0)"
]
},
{
"cell_type": "markdown",
"id": "e3d5580e",
"metadata": {},
"source": [
"Within the user proxy agent, we can decide whether to activate human input mode (here, we use human_input_mode=\"NEVER\" for conciseness). Setting it to \"ALWAYS\" lets you interact with LLaVA in a multi-round dialogue and provide feedback as the conversation unfolds."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "67157629",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser_proxy\u001b[0m (to image-explainer):\n",
"\n",
"What's the breed of this dog? \n",
"<image>.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[34mYou are an AI agent and you can view images.\n",
"###Human: What's the breed of this dog? \n",
"<image>.\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mimage-explainer\u001b[0m (to User_proxy):\n",
"\n",
"The breed of the dog in the image is a poodle.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"image_agent = LLaVAAgent(\n",
" name=\"image-explainer\",\n",
" max_consecutive_auto_reply=10,\n",
" llm_config={\"config_list\": llava_config_list, \"temperature\": 0.5, \"max_new_tokens\": 1000},\n",
")\n",
"\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \"groupchat\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" human_input_mode=\"NEVER\", # Try \"ALWAYS\" instead to provide feedback interactively\n",
" max_consecutive_auto_reply=0,\n",
")\n",
"\n",
"# Ask the question with an image\n",
"user_proxy.initiate_chat(\n",
" image_agent,\n",
" message=\"\"\"What's the breed of this dog?\n",
"<img https://th.bing.com/th/id/R.422068ce8af4e15b0634fe2540adea7a?rik=y4OcXBE%2fqutDOw&pid=ImgRaw&r=0>.\"\"\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3f60521d",
"metadata": {},
"source": [
"Now, send another image and ask a follow-up question.\n",
"\n",
"![](https://th.bing.com/th/id/OIP.29Mi2kJmcHHyQVGe_0NG7QHaEo?pid=ImgDet&rs=1)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "73a2b234",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser_proxy\u001b[0m (to image-explainer):\n",
"\n",
"What is this breed? \n",
"<image>\n",
"\n",
"Among the breeds, which one barks less?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[34mYou are an AI agent and you can view images.\n",
"###Human: What's the breed of this dog? \n",
"<image>.\n",
"###Assistant: The breed of the dog in the image is a poodle.\n",
"###Human: What is this breed? \n",
"<image>\n",
"\n",
"Among the breeds, which one barks less?\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mimage-explainer\u001b[0m (to User_proxy):\n",
"\n",
"Among the breeds, poodles tend to bark less compared to other breeds. However, it is important to note that individual dogs may have different temperaments and barking habits, regardless of their breed.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"# Ask the question with an image\n",
"user_proxy.send(\n",
" message=\"\"\"What is this breed?\n",
"<img https://th.bing.com/th/id/OIP.29Mi2kJmcHHyQVGe_0NG7QHaEo?pid=ImgDet&rs=1>\n",
"\n",
"Among the breeds, which one barks less?\"\"\",\n",
" recipient=image_agent,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0c40d0eb",
"metadata": {},
"source": [
"<a id=\"app-2\"></a>\n",
"## Application 2: Figure Creator\n",
"\n",
"Here, we define a `FigureCreator` agent, which contains three child agents: commander, coder, and critics.\n",
"\n",
"- Commander: interacts with users, runs code, and coordinates the flow between the coder and critics.\n",
"- Coder: writes code for visualization.\n",
"- Critics: LLaVA-based agent that provides comments and feedback on the generated image."
]
},
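The critic/coder iteration inside `FigureCreator` boils down to a simple loop: ask the critic for feedback, stop early on `NO_ISSUES`, otherwise forward the feedback to the coder. A minimal sketch with stub callables (names hypothetical, not AutoGen API):

```python
def improvement_loop(get_feedback, improve, n_iters=2):
    """Run up to n_iters critic -> coder rounds; stop early on NO_ISSUES."""
    rounds = 0
    for _ in range(n_iters):
        feedback = get_feedback()  # e.g. the critic agent's last message
        if "NO_ISSUES" in feedback:
            break
        improve(feedback)  # e.g. send the feedback back to the coder
        rounds += 1
    return rounds


# Example with stubs: the critic approves on the second round.
replies = iter(["Use clearer colors.", "NO_ISSUES"])
fixes = []
improvement_loop(lambda: next(replies), fixes.append, n_iters=3)
# fixes -> ["Use clearer colors."]
```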
{
"cell_type": "code",
"execution_count": 11,
"id": "e8eca993",
"metadata": {},
"outputs": [],
"source": [
"class FigureCreator(AssistantAgent):\n",
" def __init__(self, n_iters=2, **kwargs):\n",
" \"\"\"\n",
" Initializes a FigureCreator instance.\n",
"\n",
" This agent facilitates the creation of visualizations through a collaborative effort among its child agents: commander, coder, and critics.\n",
"\n",
" Parameters:\n",
" - n_iters (int, optional): The number of \"improvement\" iterations to run. Defaults to 2.\n",
" - **kwargs: keyword arguments for the parent AssistantAgent.\n",
" \"\"\"\n",
" super().__init__(**kwargs)\n",
" self.register_reply([Agent, None], reply_func=FigureCreator._reply_user, position=0)\n",
" self._n_iters = n_iters\n",
"\n",
" def _reply_user(self, messages=None, sender=None, config=None):\n",
" if all((messages is None, sender is None)):\n",
" error_msg = f\"Either {messages=} or {sender=} must be provided.\"\n",
" raise AssertionError(error_msg)\n",
"\n",
" if messages is None:\n",
" messages = self._oai_messages[sender]\n",
"\n",
" user_question = messages[-1][\"content\"]\n",
"\n",
" ### Define the agents\n",
" commander = AssistantAgent(\n",
" name=\"Commander\",\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" system_message=\"Help me run the code, and tell other agents it is in the <img result.jpg> file location.\",\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \".\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" llm_config=self.llm_config,\n",
" )\n",
"\n",
" critics = LLaVAAgent(\n",
" name=\"Critics\",\n",
" system_message=\"\"\"Criticize the input figure. How could the figure be replotted to make it better? Find bugs and issues in the figure.\n",
" Pay attention to the color, format, and presentation. Keep reader-friendliness in mind.\n",
" If you think the figure is good enough, then simply say NO_ISSUES\"\"\",\n",
" llm_config={\"config_list\": llava_config_list},\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=1,\n",
" # use_docker=False,\n",
" )\n",
"\n",
" coder = AssistantAgent(\n",
" name=\"Coder\",\n",
" llm_config=self.llm_config,\n",
" )\n",
"\n",
" coder.update_system_message(\n",
" coder.system_message\n",
" + \"ALWAYS save the figure in `result.jpg` file. Tell other agents it is in the <img result.jpg> file location.\"\n",
" )\n",
"\n",
" # Data flow begins\n",
" commander.initiate_chat(coder, message=user_question)\n",
" img = Image.open(\"result.jpg\")\n",
" plt.imshow(img)\n",
" plt.axis(\"off\") # Hide the axes\n",
" plt.show()\n",
"\n",
" for i in range(self._n_iters):\n",
" commander.send(message=\"Improve <img result.jpg>\", recipient=critics, request_reply=True)\n",
"\n",
" feedback = commander._oai_messages[critics][-1][\"content\"]\n",
" if feedback.find(\"NO_ISSUES\") >= 0:\n",
" break\n",
" commander.send(\n",
" message=\"Here is the feedback to your figure. Please improve! Save the result to `result.jpg`\\n\"\n",
" + feedback,\n",
" recipient=coder,\n",
" request_reply=True,\n",
" )\n",
" img = Image.open(\"result.jpg\")\n",
" plt.imshow(img)\n",
" plt.axis(\"off\") # Hide the axes\n",
" plt.show()\n",
"\n",
" return True, \"result.jpg\""
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "977b9017",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser\u001b[0m (to Figure Creator~):\n",
"\n",
"\n",
"Plot a figure by using the data from:\n",
"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\n",
"\n",
"I want to show both temperature high and low.\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"\n",
"Plot a figure by using the data from:\n",
"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\n",
"\n",
"I want to show both temperature high and low.\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"First, we will download the CSV file, then we will parse it using pandas, a popular data analysis library in Python. After that, we will plot the data using matplotlib.\n",
"\n",
"This is how we could do this:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Step 1: Load the Data\n",
"url = \"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\"\n",
"data = pd.read_csv(url)\n",
"\n",
"# Step 2: Parse the date to datetime format\n",
"data['date'] = pd.to_datetime(data['date'])\n",
"\n",
"# Step 3: Plot the Data\n",
"plt.figure(figsize=(10,6))\n",
"plt.plot(data['date'], data['temp_max'], label='Temp Max')\n",
"plt.plot(data['date'], data['temp_min'], label='Temp Min')\n",
"\n",
"plt.title('Seattle Weather')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Temperature (F)')\n",
"plt.legend()\n",
"plt.grid()\n",
"\n",
"# Save the figure\n",
"plt.savefig('result.jpg')\n",
"\n",
"# Display the plot\n",
"plt.show()\n",
"```\n",
"\n",
"When you run this code, it will load the data from the given URL, parse the 'date' column to datetime format, then plot the \"temp_max\" and \"temp_min\" over time. The resulting plot is then shown to you. The plot will automatically be saved as 'result.jpg' in the current directory. I will also submit these instructions to other agents.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Figure(1000x600)\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Great! The code has successfully executed and the plot was generated and saved as `result.jpg`. \n",
"\n",
"If you check the working directory, you should find the figure saved as `result.jpg`.\n",
"\n",
"Let me know if you need help with anything else.\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgMAAAE9CAYAAACWQ2EXAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9ebRl11Xfi3/mWnvvc25XnaqkKklWX7LlTjbuBdiAMeCAMU6CHSeExyMJbwSSN8zvB2QEXsYAxm+M8NKOxBBiTIwdetvYYIyNsWxjbMuN3Kix+lJTjapU3a2q251z9l5rzt8fa+1zT5VKcklWc2/d/dW4qnPvOWeffdZee8255vzO7xQzMzp06NChQ4cOGxbu2T6BDh06dOjQocOzi84Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02ODonIEOHTp06NBhg6NzBjp06NChQ4cNjs4Z6NChQ4cOHTY4OmegQ4cOHTp02OAonu0T6NChwyrM7DH/LiJPyfHb4zzW46cbInLW79n+/fHO5Zk6xw4dNho6Z6BDhzWCSUOoqqgqzjlUFREhxoj3HudOD+idaVxDCHjvAYgx4pwjxkhRFIQQEBGcc6cZVjNDVTGz8fHb50Vk/L72s9pjTL5n8tzbY7TfqX3uTGM+6Yy032Hyb8D4+595zh06dHjq0KUJOnRYQ5g0tiEEHnzwQW655Rbuu+++sVGEVUPZGnFg7EC0jxcWFsYGeGVlhRgjZsbhw4c5efLko4wtMDbwIQTuu+8+YowMh0P27t3LiRMnUFWWl5fZv38/o9HoNOehNeitY9Cey+LiIjFGVJVjx45x+PDh056PMT6mg2JmY8emQ4cOTx86Z6BDhzWEGCOQDOGHP/xh3vnOd3LTTTfxB3/wBzz88MOn7bRbY9oa0daR8N6zf/9+3vnOdxJCYDQa8eu//uvMz89TFAUf+tCH+PznP49zjhDC+PPa9zvnWFlZ4Vd/9VfZt28fTdPwS7/0S/ze7/0eZsbHP/5x/uRP/uQ0I90a88ljtQ7Nb/7mb3LnnXfiveeTn/wkH/zgB8evn4xotN+pjS60x22jEu25dujQ4alHlybo0GENod2ZA9x44438o3/0j3jd6143NrRN0/D+97+fO++8k507d/ITP/ETiAjvf//7OXz4MJdddhlvfetb+exnP8tf/uVfsry8zEte8hI+9alP0TQNP/iDPzg21sPhkI985CPccsstbNu2jZ/4iZ9g586dAMzNzXHttddy22238ZKXvIStW7dy4MABmqbha1/7Gt/7vd/LHXfcwZ//+Z9T1zU/9EM/xHd913fx2c9+ls9+9rOUZcmb3/xmNm3axF//9V9zzz338NrXvpbZ2VnuvPNOfuM3fgMR4Z//83/O3NwcH/nIR7j55pvZtGkTP/mTP0mMkd/7vd9j69atbNu2jX/8j/8xZVk+a9elQ4fzHV1koEOHNYof+7Ef413vehc/93M/x3ve8x6Wl5f5yEc+wgMPPMDP/uzPIiL86Z/+KVVV8YIXvICXv/zlfPnLX+aTn/wk119/PS984Qt5xzvewRve8AauuuoqfvZnf5bv+77vA1Jq4HOf+xw33XQTP/MzP8O2bdt43/veN45MOOd46Utfyq
233sptt93GK1/5Sqampti7dy8PP/wwl1xyCf/pP/0n3vjGN/LjP/7j/O7v/i7z8/NceOGFvOY1r+GCCy7gP/7H/8iFF17Ii1/8Yn7iJ36Ct73tbagqp06d4u1vfzsAf/u3f8vNN9/MjTfeyM/8zM9wxRVX8D//5/9kaWmJv/3bv+VVr3oVP/IjP3Iap6BDhw5PPbrIQIcOawztzv0HfuAHeMUrXsGDDz7Iu9/9bmKMPPzww+zZs4ff/u3f5vjx41x88cXcfffdvO997+NlL3sZIsLBgwd5/vOfT7/fZ/PmzZgZRVGwdetWqqoa5+Vvv/12HnjgAd71rnexuLjI9PQ0TdNQFAVmxktf+lLe+973AvAjP/IjqCqf/vSnERGqquL222/nAx/4AADD4ZBDhw7xvve9j61btzI1NcX8/Pw4xD83N8fc3BwAr3jFK7jiiit4wQtewAMPPMDRo0fZu3cvv/3bv83y8jJFkZalK664gpe85CUURfGolEKHDh2eWnTOQIcOawhtztzM2LNnD1u2bGH37t1ceeWVLCwssHv3bmZnZ/n7f//vY2aUZcmXv/xlXvCCF/CWt7yFu+66a+xMrKyscPLkSebm5uj1eszPz7Np06bxZ1177bUcO3aMn/qpnwKgLEt6vd74+a1bt+Kc4xvf+AbveMc7CCHwjne8g7e85S1s2rSJF7zgBbz1rW9l27ZtrKyssGXLFg4cOMC//tf/mrvuuouPfvSjAPT7fRYWFhgMBhRFQVEU41RIURQ897nP5b777uOf/bN/NnZc2ooEEcF7P66K6NChw9ODzhk4B3Q7kg7PFCZLCW+//XZuvfVW6roe5/Q3bdrEH/3RH/E7v/M7TE9P88M//MO89rWv5bd+67f4b//tv3HRRRexa9cuLr30Unbv3s1//a//lZ/+6Z/mTW96E+9617t4wxvewM6dO9m2bRuvfOUrOXbsGO9+97vp9Xr8wA/8AFdeeeW46mBmZobXve51zM/PMzU1xVVXXcUVV1zBDTfcwK5du/i5n/s5PvzhDxNC4Iorrhh/zn/5L/+FXbt28YpXvALnHG9605v40Ic+xIEDB7j88svHkYmtW7cyGAy44YYbOHjwIP/rf/0vnHN8//d/P7t37+bqq68ej4f3flzG2KHDM4GNVsYq1t1dj8Jj1Tt36PBM48y5+Hh/P5tYz7kICz3WZzzW+TzeZ5zrax7r2Od6Hh06PJPYCHOycwbOgjPLnWBjTIYOHTp06HB2nO82oEsTfAuc7xOgQ4cOHTp06JyBx4CqMhgM+MpXvkLTNOO/P5au+rfCpLLcWg/GrIdzbHFmeHqtY72lntbj+K6H84RubJ9OfLtj237XXq/Ha17zGqqqeipPb02icwYeAyLCqVOn+LM/+zN+9Ed/dDw5Whb0402wMw3/wYMH2blz51m12dcCJs+3aRqOHz/Orl27AMa69msVIQSOHj06Hl9YWwvrmXPh4Ycf5qKLLqIsy9MkhdciWuni7du3j0sS2/m/FnDmgn/8+HH6/T6zs7On/X2t4uTJk4gIW7ZsWXNjC6fP3cFgwOLiItu3bx8rT661851EXdecOHGCiy66aPy3JzoXzIw/+ZM/4aUvfWnnDGxktDfBtddey/d+7/eO1c+ezOJy7733jpnRLdaiUwDpJtq3bx/XXHMNsPZ3A6rKvffey/Oe97zTGuSsNbTjeM8997B79+6zyvGuRdx3331cccUVZ22QtNZw4MABNm/ezOzs7GnSyGsVR44cQUTYvn07sHbXBBFhcXGR+fl5LrvssvHf1vLYNk3DQw89xO7du5/UutBu/L785S+PHbXzvUdG5wycBe1EaJrmUfXNT1QJbbIb3NmIiWsNk41uzuxetxYxaVDXauRlEmc284G1Pb6TfQbW8nnC6ffm5DivVUyO6Vof3zMjF2vZ8YbT76knO7Zn9us439E5A2dBK3RSluVpE2k4GrGwsESIEROHiWCAM8Xx6JBZewNJ2ePw/E
mcGd6BxYiT7AyYrB5nosb8zPa1Z3aHa4//WGWQj8dRmHwOTr+xm6bBOcfRo0fZsmUL/X7/qRvYpwFnG5+1jslrtZb
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mCommander\u001b[0m (to Critics):\n",
"\n",
"Improve <image>\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[34mCriticize the input figure. How to replot the figure so it will be better? Find bugs and issues for the figure. \n",
" Pay attention to the color, format, and presentation. Keep in mind of the reader-friendliness.\n",
" If you think the figures is good enough, then simply say NO_ISSUES\n",
"###Human: Improve <image>\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mCritics\u001b[0m (to Commander):\n",
"\n",
"The input figure shows a graph of Seattle weather, with a blue line representing the temperature and an orange line representing the humidity. The graph is displayed on a white background, with the title \"Seattle Weather\" at the top.\n",
"\n",
"There are a few issues with the figure that could be improved:\n",
"\n",
"1. The color scheme for the temperature and humidity lines is not clear. The blue line represents the temperature, but it is not immediately clear to the viewer. A more distinct color or labeling could help clarify this.\n",
"2. The graph does not have any axis labels or units, making it difficult for the viewer to understand the scale and units of the temperature and humidity values.\n",
"3. The graph is not well-organized, with the temperature and humidity lines overlapping and not clearly separated. A more organized layout could help the viewer better understand the relationship between the two variables.\n",
"\n",
"To improve the figure, the following changes could be made:\n",
"\n",
"1. Use a more distinct color for the temperature line, such as red, and label it with a clear title, such as \"Temperature (°C)\".\n",
"2. Add axis labels for both the temperature and humidity lines, indicating the units and scale of the values.\n",
"3. Separate the temperature and humidity lines, either by using different colors or by adding a clear separation between them.\n",
"4. Consider adding a legend or key to help the viewer understand the meaning of the different colors and lines on the graph.\n",
"\n",
"By making these changes, the figure will be more reader-friendly and easier to understand.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"Here is the feedback to your figure. Please improve! Save the result to `result.jpg`\n",
"The input figure shows a graph of Seattle weather, with a blue line representing the temperature and an orange line representing the humidity. The graph is displayed on a white background, with the title \"Seattle Weather\" at the top.\n",
"\n",
"There are a few issues with the figure that could be improved:\n",
"\n",
"1. The color scheme for the temperature and humidity lines is not clear. The blue line represents the temperature, but it is not immediately clear to the viewer. A more distinct color or labeling could help clarify this.\n",
"2. The graph does not have any axis labels or units, making it difficult for the viewer to understand the scale and units of the temperature and humidity values.\n",
"3. The graph is not well-organized, with the temperature and humidity lines overlapping and not clearly separated. A more organized layout could help the viewer better understand the relationship between the two variables.\n",
"\n",
"To improve the figure, the following changes could be made:\n",
"\n",
"1. Use a more distinct color for the temperature line, such as red, and label it with a clear title, such as \"Temperature (°C)\".\n",
"2. Add axis labels for both the temperature and humidity lines, indicating the units and scale of the values.\n",
"3. Separate the temperature and humidity lines, either by using different colors or by adding a clear separation between them.\n",
"4. Consider adding a legend or key to help the viewer understand the meaning of the different colors and lines on the graph.\n",
"\n",
"By making these changes, the figure will be more reader-friendly and easier to understand.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Thank you for your feedback. I will indeed make the improvements accordingly. This time, each graph line will be labeled showing which indicates \"Temp Max\" and which indicates \"Temp Min\". I will also assign a red color to the line representing \"Temp Max\" and a blue color to the line representing \"Temp Min\". I will make sure the axes have the appropriate labels. \n",
"\n",
"Follow this code and it will improve your figure and saved as `result.jpg`:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Step 1: Load the Data\n",
"url = \"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\"\n",
"data = pd.read_csv(url)\n",
"\n",
"# Step 2: Parse the date to datetime format\n",
"data['date'] = pd.to_datetime(data['date'])\n",
"\n",
"# Step 3: Plot the Data\n",
"plt.figure(figsize=(10,6))\n",
"plt.plot(data['date'], data['temp_max'], color='red', label='Temp Max')\n",
"plt.plot(data['date'], data['temp_min'], color='blue', label='Temp Min')\n",
"\n",
"plt.title('Seattle Weather')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Temperature (F)')\n",
"plt.legend()\n",
"plt.grid()\n",
"\n",
"# Save the figure\n",
"plt.savefig('result.jpg')\n",
"\n",
"# Display the plot\n",
"plt.show()\n",
"\n",
"```\n",
"\n",
"This code improves upon the previous one by adding distinct colors for the temperature max and min lines (red and blue, respectively), labeling each line, and providing proper axes labels. The result will be a plot that is more reader-friendly and easier to understand. The plot will automatically be saved as 'result.jpg' in the current directory. I will also submit these instructions to other agents.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Figure(1000x600)\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Great, the code has been successfully executed and the updates have been made based on the feedback. \n",
"\n",
"You should now have a more reader-friendly plot, with clear distinction between maximum and minimum temperatures, and more evident axis labels. This updated figure is saved as `result.jpg` in your current directory.\n",
"\n",
"If you need further improvements or need assistance with something else, feel free to ask.\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgMAAAE9CAYAAACWQ2EXAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9eZhlWVnni3/etdbe50RERk6VNRfURAFVzMggpeIEDq2odD+CtLSP17Z9fq3dt+n7U+6v2+7bat9ubW21na7SdoNccQAUFBFQBinEaopBqoCiqLkya8w5YzrDXmu97++PtfeJyKysIoEqKiJjf58nMiNOxNlnn3X2Xuu73vf7fl8xM6NHjx49evTosW3hnugT6NGjR48ePXo8sejJQI8ePXr06LHN0ZOBHj169OjRY5ujJwM9evTo0aPHNkdPBnr06NGjR49tjp4M9OjRo0ePHtscPRno0aNHjx49tjl6MtCjR48ePXpsc/RkoEePHj169Njm6MlAjx49evTosc3Rk4EePXr06NFjm6MnAz169OjRo8c2R08GevTo0aNHj22Ongz06NGjR48e2xw9GejRo0ePHj22OXoy0KNHjx49emxz9GSgR48ePXr02OboyUCPHj169OixzdGTgR49evTo0WOboycDPXr06NGjxzZHTwZ69OjRo0ePbY6eDPTo0aNHjx7bHD0Z6NGjR48ePbY5ejLQo0ePHj16bHP0ZKBHjx49evTY5ujJQI8ePXr06LHN0ZOBHj169OjRY5ujJwM9evTo0aPHNkdPBnr06NGjR49tjp4M9OjRo0ePHtscPRno0aNHjx49tjnCE30CPXr0WIeZPeLjIvKYHL87ziN9/3hDRE77PrvHH+1cvlrn2KPHdkNPBnr02CTYuBCqKqqKcw5VRUTIOeO9x7mTA3qnLq4pJbz3AOSccc6RcyaEQEoJEcE5d9LCamaoKmY2O373exGZPa97re4YG5+z8dy7Y3TvqfvdqYv5RjLSvYeNjwGz93/qOffo0eOxQ58m6NFjE2HjYptS4u677+bGG2/k9ttvny2KsL5Qdos4MCMQ3ffLy8uzBXg0GpFzxsw4ePAgJ06ceNhiC8wW+JQSt99+OzlnJpMJ+/fv5/jx46gqa2tr3HvvvUyn05PIQ7egd8SgO5eVlRVyzqgqR44c4eDBgyf9Puf8iATFzGbEpkePHo8fejLQo8cmQs4ZKAvhO9/5Tn7jN36D66+/nre85S3cf//9J+20u8W0W0Q7IuG959577+U3fuM3SCkxnU75uZ/7OY4dO0YIgXe84x189KMfxTlHSmn2et3znXOMRiN+5md+hgMHDhBj5PWvfz1vetObMDPe+9738sd//McnLdLdYr7xWB2h+c3f/E0+//nP473n/e9/P3/yJ38y+/uNEY3uPXXRhe64XVSiO9cePXo89ujTBD16bCJ0O3OAD3zgA/zAD/wA3/iN3zhbaGOMvO1tb+Pzn/88F1xwAa997WsREd72trdx8OBBnvzkJ/OqV72K6667jr/4i79gbW2N5z73uXzwgx8kxsi3f/u3zxbryWTCu971Lm688Ub27t3La1/7Wi644AIAFhcXeepTn8pnPvMZnvvc57Jnzx7uu+8+Yox86lOf4pu/+Zu5+eab+bM/+zOapuE7vuM7+Pqv/3quu+46rrvuOqqq4nu/93vZuXMn73vf+7j11lt56Utfyo4dO/j85z/PL/zCLyAi/OiP/iiLi4u8613v4hOf+AQ7d+7kh37oh8g586Y3vYk9e/awd+9e/vE//sdUVfWEfS49epzt6CMDPXpsUnzf930fb3jDG/iJn/gJ3vjGN7K2tsa73vUu7rrrLn78x38cEeGtb30rdV3zjGc8gxe84AXccMMNvP/97+c5z3kOz3zmM3nd617Hy1/+cq644gp+/Md/nG/5lm8BSmrgb//2b7n++uv5sR/7Mfbu3cub3/zmWWTCOcfznvc8brrpJj
7zmc/wohe9iLm5Ofbv38/999/PxRdfzH/9r/+V7/zO7+T7v//7+R//439w7NgxzjvvPF7ykpdwzjnn8Eu/9Eucd955PPvZz+a1r30tr371q1FVlpaWeM1rXgPAhz/8YT7xiU/wgQ98gB/7sR/jsssu43d+53dYXV3lwx/+MC9+8Yv57u/+7pM0BT169Hjs0UcGevTYZOh27t/2bd/GC1/4Qu6++25+93d/l5wz999/P3fccQe//du/zdGjR7nooov4whe+wJvf/Ga+5mu+BhHhgQce4JprrmE4HLJr1y7MjBACe/bsoa7rWV7+s5/9LHfddRdveMMbWFlZYX5+nhgjIQTMjOc973n83u/9HgDf/d3fjaryoQ99CBGhrms++9nP8va3vx2AyWTCgw8+yJvf/Gb27NnD3Nwcx44dm4X4FxcXWVxcBOCFL3whl112Gc94xjO46667OHz4MPv37+e3f/u3WVtbI4QyLV122WU897nPJYTwsJRCjx49Hlv0ZKBHj02ELmduZtxxxx3s3r2bq666issvv5zl5WWuuuoqduzYwT/8h/8QM6OqKm644Qae8Yxn8MpXvpJbbrllRiZGoxEnTpxgcXGRwWDAsWPH2Llz5+y1nvrUp3LkyBF++Id/GICqqhgMBrPf79mzB+ccn/70p3nd615HSonXve51vPKVr2Tnzp084xnP4FWvehV79+5lNBqxe/du7rvvPv7lv/yX3HLLLbz73e8GYDgcsry8zHg8JoRACGGWCgkh8LSnPY3bb7+df/pP/+mMuHQVCSKC935WFdGjR4/HBz0ZOAP0O5IeXy1sLCX87Gc/y0033UTTNLOc/s6dO/nDP/xD/vt//+/Mz8/zXd/1Xbz0pS/lt37rt/i1X/s1zj//fC688EIuueQSrrrqKn71V3+VH/mRH+EVr3gFb3jDG3j5y1/OBRdcwN69e3nRi17EkSNH+N3f/V0GgwHf9m3fxuWXXz6rOlhYWOAbv/EbOXbsGHNzc1xxxRVcdtllXHvttVx44YX8xE/8BO985ztJKXHZZZfNXudXfuVXuPDCC3nhC1+Ic45XvOIVvOMd7+C+++7j0ksvnUUm9uzZw3g85tprr+WBBx7gf/7P/4lzjpe97GVcddVVXHnllbPx8N7Pyhh79PhqYLuVsYr1d9fD8Ej1zj16fLVx6rX4aI+fzqznTIyFHuk1Hul8Hu01zvRvHunYZ3oePXp8NbEdrsmeDJwGp5Y7wfa4GHr06NGjx+lxtq8BfZrgi+BsvwB69OjRo0ePngw8AlSV8XjMxz/+cWKMs8cfyVf9i2Gjs9xmD8ZshXPscGp4erNjq6WetuL4boXzhH5sH098pWPbvdfBYMBLXvIS6rp+LE9vU6InA48AEWFpaYk//dM/5Xu+53tmF0engn60C+zUhf+BBx7gggsuOK03+2bAxvONMXL06FEuvPBCgJmv/WZFSonDhw/Pxhc218R66rVw//33c/7551NV1UmWwpsRnXXxvn37ZiWJ3fW/GXDqhH/06FGGwyE7duw46fHNihMnTiAi7N69e9ONLZx87Y7HY1ZWVti3b9/MeXKzne9GNE3D8ePHOf/882ePfanXgpnxx3/8xzzvec/rycB2RncTPPWpT+Wbv/mbZ+5nX87kctttt82U0R02IymAchMdOHCApzzlKcDm3w2oKrfddhtPf/rTT2qQs9nQjeOtt97KVVdddVo73s2I22+/ncsuu+y0DZI2G+677z527drFjh07TrJG3qw4dOgQIsK+ffuAzTsniAgrKyscO3aMJz/5ybPHNvPYxhi55557uOqqq76seaHb+N1www0zona298joycBp0F0IMcaH1Td/qU5oG7vBnU6YuNmwsdHNqd3rNiM2LqibNfKyEac284HNPb4b+wxs5vOEk+/NjeO8WbFxTDf7+J4audjMxBtOvqe+3LE9tV/H2Y6eDJwGndFJVVUnXUjj6ZSVlWU0JRyGASpgIhgbLj7AAE
EwU4Yelg4/RHIODYGkSsARMng1sivH2Vhjfmr72lO7w3Xn+UhlkI+mUdj4Ozj5xo4x4pzj8OHD7N69m+Fw+LiM8WO
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mCommander\u001b[0m (to Critics):\n",
"\n",
"Improve <image>\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"Here is the feedback to your figure. Please improve! Save the result to `result.jpg`\n",
"Improve <img result.jpg>\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"I apologize for the misunderstanding, but without more specific feedback about the figure, it's difficult to understand what improvements are needed. As of now, the figure includes distinct colors for max and min temperatures, clear labeling of lines, and proper axis labels.\n",
"\n",
"However, if you want to improve visual clarity, you can try:\n",
"\n",
"- Changing the line style (making one of them dotted, for instance).\n",
"- Adding a title to the legend for better clarification.\n",
"- Changing the figure size for an improved view.\n",
"\n",
"Here is the modified code:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Step 1: Load the Data\n",
"url = \"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\"\n",
"data = pd.read_csv(url)\n",
"\n",
"# Step 2: Parse the date to datetime format\n",
"data['date'] = pd.to_datetime(data['date'])\n",
"\n",
"# Step 3: Plot the Data\n",
"plt.figure(figsize=(12,8))\n",
"plt.plot(data['date'], data['temp_max'], 'r-', label='Temp Max')\n",
"plt.plot(data['date'], data['temp_min'], 'b--', label='Temp Min')\n",
"\n",
"plt.title('Seattle Weather')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Temperature (F)')\n",
"plt.legend(title='Temperature')\n",
"\n",
"plt.grid()\n",
"\n",
"# Save the figure\n",
"plt.savefig('result.jpg')\n",
"\n",
"# Display the plot\n",
"plt.show()\n",
"\n",
"```\n",
"\n",
"This code changes the line style for the min temperature to dots for better distinction. Adjusts the figure size for a better representation and adds a title to the legend. The plot will automatically be saved as 'result.jpg' in the current directory. You are encouraged to run this code and I will also submit these instructions to other agents.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Figure(1200x800)\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Great! The code has successfully executed and the plot was generated and saved as `result.jpg`. The figure now includes distinct colors for max and min temperatures, different line styles, and a clearer legend.\n",
"\n",
"You will find the figure saved as 'result.jpg' in your current directory.\n",
"\n",
"If you need any further improvements or other assistance, please let me know.\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgMAAAFeCAYAAAAYIxzjAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9eZxlx3XfCX5PRNz73svM2quwFBZiIUSRFBeIskiCi2WJLcnzsayNskl5TKsluz1ujz0z3e7P+OPpz/T4I6nH7h61ZMuyJMuUZJmSSFlcJcikSJCgCBIEQVIgiZVAgQAKhULtWbm89+69EefMH3Hvy6xCYSUKlVl5v/gkMisz38v74sWNOHGW3xEzM3p6enp6enq2LO5CX0BPT09PT0/PhaU3Bnp6enp6erY4vTHQ09PT09OzxemNgZ6enp6eni1Obwz09PT09PRscXpjoKenp6enZ4vTGwM9PT09PT1bnN4Y6Onp6enp2eL0xkBPT09PT88WpzcGenp6enp6tji9MdDT09PT07PF6Y2Bnp6enp6eLU5vDPT09PT09GxxemOgp6enp6dni9MbAz09PT09PVuc3hjo6enp6enZ4vTGQE9PT09PzxanNwZ6enp6enq2OL0x0NPT09PTs8XpjYGenp6enp4tTm8M9PT09PT0bHF6Y6Cnp6enp2eL0xsDPT09PT09W5zeGOjp6enp6dni9MZAT09PT0/PFqc3Bnp6enp6erY4vTHQ09PT09OzxemNgZ6enp6eni1Obwz09PT09PRscXpjoKenp6enZ4vTGwM9PT09PT1bnN4Y6Onp6enp2eL0xkBPT09PT88WpzcGenp6enp6tji9MdDT09PT07PF6Y2Bnp6enp6eLU5vDPT09PT09GxxwoW+gJ6enmfHzDAzRGT27+7rZ3uciDzj7z/X53q+17ueZ7uGc13Li31NPT09T09vDPT0bBJUFcgbpnOOpmlwziEiOOeeYjB0G7CZoao452aPX7/Rrv9Z9/Pu8ev/Xvdc3c+eabNW1dnzdo8xM7z3s8eq6hlGQ/d3usf1xkBPz0tHHybo6dkkiAgpJb74xS/yH/7Df+B3fud3OH78+Bk/X7+pdxvyZDLhG9/4BjFGnnjiCb71rW+RUuK2227j8ccfP+PUvt5g+MxnPsO3vvUtqqriN37jNzh27BjLy8t89KMfpWkaUkqzv7V+U1dVjhw5wiOPPIKq8pnPfIYnn3wS7/3s97prW/+49YZDbwj09Ly09J6Bnp5NxMmTJ/m93/s9/vk//+fUdc1gMKCua77+9a9TVRXf/d3fDTDb/F/zmtfw0EMP8Uu/9Ev87M/+LA8++CAPPfQQ7373u/n617/O/Pw8u3fv5q677kJE+O7v/m6GwyFmxng85pZbbuEd73gHH//4x7n22mvZu3cv9957L29729v4+te/zs6dO3nNa17DqVOnuOeee9i2bRuvec1ruPnmm7nvvvv46Z/+ae68805WV1fZsWMHr3/969m2bRv3338/TzzxBK961avYvXs33/jGN2iahuuvv559+/adl9BFT0/P09N7Bnp6Nglmxmg0YjQa8b73vY9vfvOblGXJH/zBH/C1r32NRx55hP/4H/8j4/GYQ4cOcdddd/Gbv/mbFEVBWZbs2rWL4XDIwsIC27dvnz3nb/3Wb3HgwAHuuecePvCBD8zCEa985Ss5cOAA9957Lz/yIz/Cww8/zF/+5V/yyle+kl/5lV9haWmJT3ziE9x2222cOHGCI0eO8JGPfIRPf/rTFEXB/Pw8CwsLVFXFN7/5TQ4cOMAHP/hBvvKVr/BHf/RH1HXNr/zKr3Do0CF+4Rd+gcXFRcqy7EMEPT0XgN4Y6OnZJKgq8/Pz/PzP/zw//MM/zB133MH73vc+br/9dp588kkOHjzIE088wQMPPMDXvvY1Qgjcf//9LCwssH//fl772tdyxRVXcO2113LDDTcAEGPkc5/7HI8++iiHDx
/miSeeALKRcNVVV3H69Gm+8pWvcNNNN3H48GG+8Y1vcNVVV3HnnXfyzW9+k+XlZR5//HFuvfVWHn/8cZxzPPzww1x++eVcd911vPzlL2cwGPCDP/iD/PW//td5/PHHueOOOzhy5Aj33nsvi4uLjMdjXv7yl/NDP/RD7Nmz54xQR09Pz0tDHybo6dkkOOdYWVnhjjvuYN++fezatQvvPTfeeCM7duzg2muvZTQa8c1vfpOrrrqKXbt2MZ1OGQwGrKyscODAAQaDAYcOHZpt+iLC93zP93DDDTdw+eWXs3PnztlmXBQFl19+Offeey/XXXcdRVFw5MgRrr/+em688UZuvPFGRqMR+/fv573vfS/veMc7+PznP09VVQyHw9nf8d4TQpj9vRtvvJGlpSVuuukmbrrpJrZv304IYZY82L3Wnp6elw6xs2uAep5SFtXTsxEwM+q65ktf+hJHjhxh586d3HTTTZgZt99+O8vLy7zqVa/i8ssv57Of/Szbtm3DOceb3/xmvvjFL3L69Gne8pa3cNttt7F79262b9/OpZdeymAw4Pbbb2cymXDjjTfyspe9bPY3H3jgAZ588kne9ra38cADD3Ds2DHe/va3c+TIEe644w6cc7zpTW9iPB7zla98ZWakXHfdddxyyy3s3LmThYUFrr76aobDIffddx+ve93ruOuuu3jsscfYv38/r3rVq7j33nv53u/93pkR0IcJejYaF/uc7I2Bc7C+FOpinwA9PT09Pefm7D3gYt4Pel/cM9CVXPX09PT0bF22wj7Q5ww8DV1p1ec///ktMRF6ep6OzWQUb6Zr3UxstnF9sa53OBzy1re+Fe/9i3BVG5veGDgHnWLbyZMnufvuu3nHO97xgt1DZsYTTzzBZZddNkui2sg3VYyRY8eOcdlll20Kl9h0OmVpaYlLLrnkQl/Ks2JmHD58mEsvvXTTLC7Hjh1jfn6e+fl5YGPP3WPHjrGwsMDc3Bywsa8VYGlpiZQSu3btutCX8ox0ypOHDx9m//79z7gurBew6pJBX2rMjBgjJ0+e5NJLL31Bz9EJfH3sYx/jzW9+86a5X78demPgHHRyqc45LrvsMl7zmtc878d3mBnz8/Ncc801myJDOqXEo48+yrXXXrtpjIHjx49zxRVXbPjr3SxzYf04Hjp0iJ07dzIajTbk+J59rZ2Wwka81vWYGadPnybGyN69ey/05Twn5ubmuP766591bDtD4EK+B3Vd8+STT3L11Ve/4OcwMz73uc+9iFe1semNgXOwXoMdvr2kkc6wWG8xb+SFqjOCNvp1ns1G3lzXs9nG9uw+B5thnDfDtZ79/m/0+bC+X8QzNbxaWVmhaRrgwr4mVSWlxOLi4nP6/fXXOhqNKMvyPF3ZxqU3Bs6BiOC9f1EWk7O11jf6Tb/Rr+9sNvqifzabrUKlG9/Ncs2b6Vo3y5rwXIkxMp1O2bZt21OMhq6PxbmMoPVr5HqdiXM10Drb63r286z/3mAwmIVmz8W5mnN1HpvOW7PRQ00vJr0xcA7O1W71+d6wzzSJNsPNvxkXqo18rZupRGkzzd3NdK3rebE8jy81z+QZSClRFAWDweApvzOdTqnrGsib8PpQzvqxKIpitqkXRTGL3a+P2Z+r9FtEaJpmZkA452ZiV2d37Oyu4elex3g8foohshXojYGenp6LGzPWbb0ItP9uT6QAm2hD3qg8k6Fwzz33cOutt3L//ffz5je/mTe96U3s27ePHTt2cOrUKZqmYfv27ayurrKwsIBzjtXVVVJK7Nu3j+l0yqlTp1hYWKAsS5aWllBVyrJkdXWVPXv2cNddd3H//ffzN//m38TMmJubY2VlZabA6ZwjhMDKygqXXnrpOUMBW8kTcDa9MdDT03PxY0ojQqFgImBG44xCDbbYCfB8sX4jPdsT9rrXvY5LL72U97///Vx33XV88pOfZDKZ8CM/8iP88i//Mm94wxt48MEHecUrXoGqcvXVV/PZz36WvX
v38ta3vpWHH36Y8XjMqVOn+IEf+AF+67d+i7/1t/4WIsLDDz+MmbFnzx7uvvtuXvnKV/KFL3yBv/t3/y7vfe97ef3
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mFigure Creator~\u001b[0m (to User):\n",
"\n",
"result.jpg\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"config_list_gpt4 = autogen.config_list_from_json(\n",
" \"OAI_CONFIG_LIST\",\n",
" filter_dict={\n",
" \"model\": [\"gpt-4\", \"gpt-4-0314\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
" },\n",
")\n",
"\n",
"gpt4_llm_config = {\"config_list\": config_list_gpt4, \"cache_seed\": 42}\n",
"\n",
"# config_list_gpt35 = autogen.config_list_from_json(\n",
"# \"OAI_CONFIG_LIST\",\n",
"# filter_dict={\n",
"# \"model\": [\"gpt-35-turbo\", \"gpt-3.5-turbo\"],\n",
"# },\n",
"# )\n",
"\n",
"# gpt35_llm_config = {\"config_list\": config_list_gpt35, \"cache_seed\": 42}\n",
"\n",
"\n",
"creator = FigureCreator(name=\"Figure Creator~\", llm_config=gpt4_llm_config)\n",
"\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0, code_execution_config={\"use_docker\": False}\n",
") # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
"\n",
"user_proxy.initiate_chat(\n",
" creator,\n",
" message=\"\"\"\n",
"Plot a figure by using the data from:\n",
"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\n",
"\n",
"I want to show both temperature high and low.\n",
"\"\"\",\n",
")"
]
},
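  {
   "cell_type": "markdown",
   "id": "display-result-md",
   "metadata": {},
   "source": [
    "If the chat above succeeded, the generated figure can be displayed inline before cleanup. This is a minimal sketch: it assumes the agents saved their output as `result.jpg` in the current working directory (the filename used in the prompt above)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "display-result-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Display the generated figure, if it exists. `os`, `plt`, and `Image`\n",
    "# were imported in the first cell of this notebook.\n",
    "if os.path.exists(\"result.jpg\"):\n",
    "    plt.imshow(Image.open(\"result.jpg\"))\n",
    "    plt.axis(\"off\")\n",
    "    plt.show()\n",
    "else:\n",
    "    print(\"result.jpg not found; the figure may not have been generated.\")"
   ]
  },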
{
"cell_type": "code",
"execution_count": 13,
"id": "f0a58827",
"metadata": {},
"outputs": [],
"source": [
"if os.path.exists(\"result.jpg\"):\n",
" os.remove(\"result.jpg\") # clean up"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}