mirror of https://github.com/microsoft/autogen.git
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# OptiGuide with Nested Chats in AutoGen \n",
"\n",
"This is a nested chat re-implementation of [OptiGuide](https://github.com/microsoft/OptiGuide), which is an LLM-based supply chain optimization framework. \n",
" \n",
" \n",
"In addition to AutoGen, this notebook also requires eventlet and gurobipy. The eventlet package is used to constrain code execution with a timeout, and gurobipy is a mathematical optimization library for solving mixed-integer linear and quadratic optimization problems. \n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
"\n",
"```bash\n",
"pip install pyautogen eventlet gurobipy\n",
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
":::\n",
"````\n",
"\n"
]
},
|
||
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import Union\n",
"\n",
"# import auxiliary packages\n",
"import requests  # for loading the example source code\n",
"from eventlet.timeout import Timeout\n",
"\n",
"# test Gurobi installation\n",
"from gurobipy import GRB\n",
"from termcolor import colored\n",
"\n",
"import autogen\n",
"from autogen.code_utils import extract_code\n",
"\n",
"config_list_gpt4 = autogen.config_list_from_json(\n",
"    \"OAI_CONFIG_LIST\",\n",
"    filter_dict={\n",
"        \"model\": [\"gpt-4\", \"gpt4\", \"gpt-3.5-turbo\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
"    },\n",
")\n",
"llm_config = {\"config_list\": config_list_gpt4}"
]
},
|
||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
":::\n",
"````\n",
"\n",
"Below is the intended agent orchestration in OptiGuide."
]
},
|
||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](optiGuide_new_design.png)\n",
"\n",
"\n",
"\n",
"## Step 0. Prepare Helper Functions\n",
"The code cell below includes several helper functions to be used by agents. The helper functions are adapted directly from [OptiGuide](https://github.com/microsoft/OptiGuide).\n",
"\n",
"### Utility Functions\n",
"Several utility functions (`replace`, `insert_code`, `run_with_exec`) are defined to manipulate and execute the source code dynamically:\n",
"\n",
"- **replace(src_code, old_code, new_code) -> str:**<br/>Replaces a specified block of code within the source code with new code. This is essential for updating the code dynamically based on chatbot and user interactions.\n",
"\n",
"- **insert_code(src_code, new_lines) -> str:**<br/>Determines where to insert new lines of code based on specific markers in the source code. It is used to seamlessly integrate generated code snippets.\n",
"\n",
"- **run_with_exec(src_code) -> Union[str, Exception]:**<br/>Executes the modified source code within a controlled environment, capturing and returning the output or any errors that occur. This function is crucial for testing the feasibility and correctness of the generated code solutions.\n",
"\n",
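"A detail worth calling out: `insert_code` routes snippets containing `addConstr` to the constraint marker (`# OPTIGUIDE CONSTRAINT CODE GOES HERE`) and everything else to the data marker (`# OPTIGUIDE DATA CODE GOES HERE`), while `replace` re-indents each inserted line to match the marker's indentation. The sketch below (illustrative only, not part of the original helpers) reimplements that indentation logic:\n",
"\n",
"```python\n",
"import re\n",
"\n",
"# Sketch of the indentation-preserving replacement used by `replace`:\n",
"# capture the marker's leading spaces, then prefix each new line with them.\n",
"def replace_marker(src_code, old_code, new_code):\n",
"    pattern = r\"( *){old}\".format(old=re.escape(old_code))\n",
"    head_spaces = re.search(pattern, src_code).group(1)\n",
"    new_code = \"\\n\".join(head_spaces + line for line in new_code.split(\"\\n\"))\n",
"    return re.sub(pattern, new_code, src_code)\n",
"\n",
"src = \"def build():\\n    # OPTIGUIDE DATA CODE GOES HERE\\n    pass\"\n",
"print(replace_marker(src, \"# OPTIGUIDE DATA CODE GOES HERE\", \"demand = 10\"))\n",
"```\n",
"\n",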
"### External Code Retrieval\n",
"The cell also includes logic to retrieve example source code from a remote repository using the requests library. This example code serves as a base for the chatbot to work with, allowing users to see real-life applications of the concepts discussed.\n",
"\n",
"The retrieved code is then displayed, showing both the beginning and the end of the source code to give a glimpse of its structure and content."
]
},
|
||
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"def replace(src_code: str, old_code: str, new_code: str) -> str:\n",
"    \"\"\"\n",
"    Inserts new code into the source code by replacing a specified old\n",
"    code block.\n",
"\n",
"    Args:\n",
"        src_code (str): The source code to modify.\n",
"        old_code (str): The code block to be replaced.\n",
"        new_code (str): The new code block to insert.\n",
"\n",
"    Returns:\n",
"        str: The modified source code with the new code inserted.\n",
"\n",
"    Raises:\n",
"        None\n",
"\n",
"    Example:\n",
"        src_code = 'def hello_world():\\n    print(\"Hello, world!\")\\n\\n# Some\n",
"        other code here'\n",
"        old_code = 'print(\"Hello, world!\")'\n",
"        new_code = 'print(\"Bonjour, monde!\")\\nprint(\"Hola, mundo!\")'\n",
"        modified_code = replace(src_code, old_code, new_code)\n",
"        print(modified_code)\n",
"        # Output:\n",
"        # def hello_world():\n",
"        #     print(\"Bonjour, monde!\")\n",
"        #     print(\"Hola, mundo!\")\n",
"        # Some other code here\n",
"    \"\"\"\n",
"    pattern = r\"( *){old_code}\".format(old_code=old_code)\n",
"    head_spaces = re.search(pattern, src_code, flags=re.DOTALL).group(1)\n",
"    new_code = \"\\n\".join([head_spaces + line for line in new_code.split(\"\\n\")])\n",
"    rst = re.sub(pattern, new_code, src_code)\n",
"    return rst\n",
"\n",
"\n",
"def insert_code(src_code: str, new_lines: str) -> str:\n",
"    \"\"\"Insert a code patch into the source code.\n",
"\n",
"    Args:\n",
"        src_code (str): the full source code\n",
"        new_lines (str): The new code.\n",
"\n",
"    Returns:\n",
"        str: the full source code after insertion (replacement).\n",
"    \"\"\"\n",
"    if new_lines.find(\"addConstr\") >= 0:\n",
"        return replace(src_code, CONSTRAINT_CODE_STR, new_lines)\n",
"    else:\n",
"        return replace(src_code, DATA_CODE_STR, new_lines)\n",
"\n",
"\n",
"def run_with_exec(src_code: str) -> Union[str, Exception]:\n",
"    \"\"\"Run the code snippet with exec.\n",
"\n",
"    Args:\n",
"        src_code (str): The source code to run.\n",
"\n",
"    Returns:\n",
"        object: The result of the code snippet.\n",
"        If the code succeeds, returns the objective value (float or string);\n",
"        else, returns the error (exception).\n",
"    \"\"\"\n",
"    locals_dict = {}\n",
"    locals_dict.update(globals())\n",
"    locals_dict.update(locals())\n",
"\n",
"    timeout = Timeout(\n",
"        60,\n",
"        TimeoutError(\"This is a timeout exception, in case GPT's code falls into an infinite loop.\"),\n",
"    )\n",
"    try:\n",
"        exec(src_code, locals_dict, locals_dict)\n",
"    except Exception as e:\n",
"        return e\n",
"    finally:\n",
"        timeout.cancel()\n",
"\n",
"    try:\n",
"        status = locals_dict[\"m\"].Status\n",
"        if status != GRB.OPTIMAL:\n",
"            if status == GRB.UNBOUNDED:\n",
"                ans = \"unbounded\"\n",
"            elif status == GRB.INF_OR_UNBD:\n",
"                ans = \"inf_or_unbound\"\n",
"            elif status == GRB.INFEASIBLE:\n",
"                ans = \"infeasible\"\n",
"                m = locals_dict[\"m\"]\n",
"                m.computeIIS()\n",
"                constrs = [c.ConstrName for c in m.getConstrs() if c.IISConstr]\n",
"                ans += \"\\nConflicting Constraints:\\n\" + str(constrs)\n",
"            else:\n",
"                ans = \"Model Status:\" + str(status)\n",
"        else:\n",
"            ans = \"Optimization problem solved. The objective value is: \" + str(locals_dict[\"m\"].objVal)\n",
"    except Exception as e:\n",
"        return e\n",
"\n",
"    return ans"
]
},
|
||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1. Agent Construction\n",
"\n",
"This cell introduces the Writer and OptiGuide agent classes and their instances to manage the interaction between the user, the chatbot, and the optimization solver. This streamlines the process of generating, evaluating, and integrating code solutions for supply chain optimization problems.\n",
"\n",
"### Classes Defined\n",
"\n",
"- **`OptiGuide`**: Inherits from `autogen.AssistantAgent` and serves as the main class for handling the supply chain optimization logic. It maintains state information like the source code, debugging attempts left, success status, and user chat history. Key methods include `set_success` and `update_debug_times`, which are used to update the agent's state based on the outcomes of interactions.\n",
"\n",
"- **`Writer`**: Also inherits from `autogen.AssistantAgent`, this class is tailored to manage the generation and explanation of Python code solutions. It keeps track of the source code and example Q&A to assist in generating responses to user queries.\n",
"\n",
"### Agent Instances\n",
"\n",
"- **`writer`**, **`safeguard`**, and **`optiguide_commander`**: Instances of the classes defined above, each configured with specific roles in the code generation and evaluation process. These agents work together to ensure that the user's questions are answered with safe, optimized, and understandable Python code solutions.\n",
"\n",
"- **`user`**: An instance of `autogen.UserProxyAgent`, representing the end-user interacting with the notebook. It's configured to simulate user inputs and responses, facilitating an interactive and engaging learning experience.\n",
"\n",
"#### Agent System Messages\n",
"Each agent is associated with a corresponding system message. Those system messages are adapted directly from [OptiGuide](https://github.com/microsoft/OptiGuide).\n",
"\n"
]
},
|
||
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"import time\n",
"\n",
"from gurobipy import GRB, Model\n",
"\n",
"# Example data\n",
"\n",
"capacity_in_supplier = {'supplier1': 150, 'supplier2': 50, 'supplier3': 100}\n",
"\n",
"shipping_cost_from_supplier_to_roastery = {\n",
"    ('supplier1', 'roastery1'): 5,\n",
".\n",
".\n",
".\n",
"\n",
"# Solve\n",
"m.update()\n",
"model.optimize()\n",
"\n",
"print(time.ctime())\n",
"if m.status == GRB.OPTIMAL:\n",
"    print(f'Optimal cost: {m.objVal}')\n",
"else:\n",
"    print(\"Not solved to optimality. Optimization status:\", m.status)\n",
"\n"
]
}
],
"source": [
"# %% System Messages\n",
"WRITER_SYSTEM_MSG = \"\"\"You are a chatbot to:\n",
"(1) write Python code to answer users questions for supply chain-related coding\n",
"project;\n",
"(2) explain solutions from a Gurobi/Python solver.\n",
"\n",
"--- SOURCE CODE ---\n",
"{source_code}\n",
"\n",
"--- DOC STR ---\n",
"{doc_str}\n",
"---\n",
"\n",
"Here are some example questions and their answers and codes:\n",
"--- EXAMPLES ---\n",
"{example_qa}\n",
"---\n",
"\n",
"The execution result of the original source code is below.\n",
"--- Original Result ---\n",
"{execution_result}\n",
"\n",
"Note that your written code will be added to the lines with substring:\n",
"\"# OPTIGUIDE *** CODE GOES HERE\"\n",
"So, you don't need to write other code, such as m.optimize() or m.update().\n",
"You just need to write code snippet in ```python ...``` block.\n",
"\"\"\"\n",
"\n",
"SAFEGUARD_SYSTEM_MSG = \"\"\"\n",
"Given the source code:\n",
"{source_code}\n",
"\n",
"Is the source code safe (not malicious code to break security\n",
"and privacy) to run?\n",
"Answer only one word.\n",
"If not safe, answer `DANGER`; else, answer `SAFE`.\n",
"\"\"\"\n",
"\n",
"# %% Constant strings to match code lines in the source code.\n",
"DATA_CODE_STR = \"# OPTIGUIDE DATA CODE GOES HERE\"\n",
"CONSTRAINT_CODE_STR = \"# OPTIGUIDE CONSTRAINT CODE GOES HERE\"\n",
"\n",
"# In-context learning examples.\n",
"example_qa = \"\"\"\n",
"----------\n",
"Question: Why is it not recommended to use just one supplier for roastery 2?\n",
"Answer Code:\n",
"```python\n",
"z = m.addVars(suppliers, vtype=GRB.BINARY, name=\"z\")\n",
"m.addConstr(sum(z[s] for s in suppliers) <= 1, \"_\")\n",
"for s in suppliers:\n",
"    m.addConstr(x[s,'roastery2'] <= capacity_in_supplier[s] * z[s], \"_\")\n",
"```\n",
"\n",
"----------\n",
"Question: What if there's a 13% jump in the demand for light coffee at cafe1?\n",
"Answer Code:\n",
"```python\n",
"light_coffee_needed_for_cafe[\"cafe1\"] = light_coffee_needed_for_cafe[\"cafe1\"] * (1 + 13/100)\n",
"```\n",
"\n",
"\n",
"\"\"\"\n",
"\n",
"CODE_PROMPT = \"\"\"\n",
"Answer Code:\n",
"\"\"\"\n",
"\n",
"DEBUG_PROMPT = \"\"\"\n",
"\n",
"While running the code you suggested, I encountered the {error_type}:\n",
"--- ERROR MESSAGE ---\n",
"{error_message}\n",
"\n",
"Please try to resolve this bug, and rewrite the code snippet.\n",
"--- NEW CODE ---\n",
"\"\"\"\n",
"\n",
"SAFEGUARD_PROMPT = \"\"\"\n",
"--- Code ---\n",
"{code}\n",
"\n",
"--- One-Word Answer: SAFE or DANGER ---\n",
"\"\"\"\n",
"\n",
"INTERPRETER_PROMPT = \"\"\"Here are the execution results: {execution_rst}\n",
"\n",
"Can you organize these information to a human readable answer?\n",
"Remember to compare the new results to the original results you obtained in the\n",
"beginning.\n",
"\n",
"--- HUMAN READABLE ANSWER ---\n",
"\"\"\"\n",
"\n",
"# Get the source code of the coffee example from OptiGuide's official repo\n",
"code_url = \"https://raw.githubusercontent.com/microsoft/OptiGuide/main/benchmark/application/coffee.py\"\n",
"response = requests.get(code_url)\n",
"# Check if the request was successful\n",
"if response.status_code == 200:\n",
"    # Get the text content from the response\n",
"    code = response.text\n",
"else:\n",
"    raise RuntimeError(\"Failed to retrieve the file.\")\n",
"# code = open(code_url, \"r\").read()  # for local files\n",
"\n",
"\n",
"# show the head and tail of the source code\n",
"print(\"\\n\".join(code.split(\"\\n\")[:10]))\n",
"print(\".\\n\" * 3)\n",
"print(\"\\n\".join(code.split(\"\\n\")[-10:]))\n",
"\n",
"writer_system_msg = WRITER_SYSTEM_MSG.format(\n",
"    source_code=code,\n",
"    doc_str=\"\",\n",
"    example_qa=example_qa,\n",
"    execution_result=\"\",\n",
")\n",
"safeguard_system_msg = SAFEGUARD_SYSTEM_MSG.format(source_code=code)\n",
"\n",
"\n",
"class OptiGuide(autogen.AssistantAgent):\n",
"    source_code: str = code\n",
"    debug_times: int = 3\n",
"    debug_times_left: int = 3\n",
"    example_qa: str = \"\"\n",
"    success: bool = False\n",
"    user_chat_history: str = \"\"\n",
"\n",
"\n",
"class Writer(autogen.AssistantAgent):\n",
"    source_code: str = code\n",
"    example_qa: str = \"\"\n",
"    user_chat_history: str = \"\"\n",
"\n",
"\n",
"writer = Writer(\"writer\", llm_config=llm_config)\n",
"safeguard = autogen.AssistantAgent(\"safeguard\", llm_config=llm_config)\n",
"optiguide_commander = OptiGuide(\"commander\", llm_config=llm_config)\n",
"\n",
"user = autogen.UserProxyAgent(\n",
"    \"user\", max_consecutive_auto_reply=0, human_input_mode=\"NEVER\", code_execution_config=False\n",
")"
]
},
|
||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2. Orchestrate Nested Chats \n",
"\n",
"These three agent instances are orchestrated as follows, with the `writer` and `safeguard` nested within the `commander` agent as its inner monologue.\n",
"\n",
"This next cell defines critical functions that manage the interactions between the `OptiGuide` system, the user, and the internal logic for processing and responding to queries. Each function plays a specific role in guiding the conversation, handling code generation requests, ensuring code safety, and summarizing outcomes. This ensures that agents receive clear instructions, immediate feedback, and a secure environment for exploring supply chain optimization problems through code.\n",
"\n",
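"The safeguard step needs to pull the fenced Python snippet out of the writer's reply; the code below uses `autogen.code_utils.extract_code` for this, which returns a list of `(language, code)` pairs. As a rough, self-contained illustration of that extraction (a sketch, not the real API), consider:\n",
"\n",
"```python\n",
"import re\n",
"\n",
"# Rough stand-in for autogen.code_utils.extract_code: pull fenced\n",
"# python snippets out of a chat message (illustrative only).\n",
"def extract_python_blocks(message):\n",
"    return re.findall(r\"```python\\n(.*?)```\", message, flags=re.DOTALL)\n",
"\n",
"reply = \"Here you go:\\n```python\\nm.addConstr(x['supplier1', 'roastery2'] == 0, '_')\\n```\"\n",
"(snippet,) = extract_python_blocks(reply)\n",
"```\n",
"\n",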
"Information about the sequence of chats can be specified in the `chat_queue` argument of the `register_nested_chats` function. The following fields are especially useful:\n",
"- `recipient` (required) specifies the nested agent;\n",
"- `message` specifies what message to send to the nested recipient agent. In a sequence of nested chats, if the `message` field is not specified, we will use the last message the registering agent received as the initial message in the first chat and will skip any subsequent chat in the queue that does not have the `message` field. You can either provide a string or define a callable that returns a string.\n",
"- `summary_method` decides what to get out of the nested chat. You can either select from existing options, including \"last_msg\" and \"reflection_with_llm\", or define your own way to extract information from the nested chat with a Callable.\n",
"- `max_turns` determines how many turns of conversation to have between the concerned agent pairs."
]
},
|
||
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def writer_init_message(recipient, messages, sender, config):\n",
"    if recipient.success:\n",
"        return None\n",
"    msg_content = messages[-1].get(\"content\", \"\")\n",
"    # board = config\n",
"    # get execution result of the original source code\n",
"    sender_history = recipient.chat_messages[sender]\n",
"    user_chat_history = \"\\nHere are the history of discussions:\\n\" f\"{sender_history}\"\n",
"\n",
"    if sender.name == \"user\":\n",
"        execution_result = msg_content  # TODO: get the execution result of the original source code\n",
"    else:\n",
"        execution_result = \"\"\n",
"    writer_sys_msg = (\n",
"        WRITER_SYSTEM_MSG.format(\n",
"            source_code=recipient.source_code,\n",
"            doc_str=\"\",\n",
"            example_qa=example_qa,\n",
"            execution_result=execution_result,\n",
"        )\n",
"        + user_chat_history\n",
"    )\n",
"\n",
"    # safeguard.reset()  # TODO: reset safeguard\n",
"    recipient.debug_times_left = recipient.debug_times\n",
"    recipient.success = False\n",
"    return writer_sys_msg + \"\\n\" + CODE_PROMPT\n",
"\n",
"\n",
"def writer_success_summary(recipient, sender):\n",
"    if sender.success:\n",
"        return sender.last_message(recipient)[\"content\"].replace(\"TERMINATE\", \"\")\n",
"    else:\n",
"        return \"Sorry. I cannot answer your question.\"\n",
"\n",
"\n",
"def safeguard_init_message(recipient, messages, sender, config):\n",
"    if recipient.success:\n",
"        return None\n",
"    last_msg_content = messages[-1].get(\"content\", \"\")\n",
"    lang, code = extract_code(last_msg_content)[0]\n",
"    if lang != \"unknown\":\n",
"        return SAFEGUARD_SYSTEM_MSG.format(source_code=code) + sender.user_chat_history\n",
"    else:\n",
"        return\n",
"    # return SAFEGUARD_SYSTEM_MSG.format(source_code=recipient.source_code)\n",
"\n",
"\n",
"def safeguard_summary(recipient, sender):\n",
"    safe_msg = sender.last_message(recipient)[\"content\"].replace(\"TERMINATE\", \"\")\n",
"\n",
"    if safe_msg.find(\"DANGER\") < 0:\n",
"        # Step 4 and 5: Run the code and obtain the results\n",
"        src_code = insert_code(sender.source_code, code)\n",
"        execution_rst = run_with_exec(src_code)\n",
"        print(colored(str(execution_rst), \"yellow\"))\n",
"        if type(execution_rst) in [str, int, float]:\n",
"            # we successfully run the code and get the result\n",
"            sender.success = True\n",
"            # Step 6: request to interpret results\n",
"            return INTERPRETER_PROMPT.format(execution_rst=execution_rst)\n",
"    else:\n",
"        # DANGER: If not safe, try to debug. Redo coding\n",
"        execution_rst = \"\"\"\n",
"        Sorry, this new code is not safe to run. I would not allow you to execute it.\n",
"        Please try to find a new way (coding) to answer the question.\"\"\"\n",
"    if sender.debug_times_left > 0:\n",
"        # Try to debug and write code again (back to step 2)\n",
"        sender.debug_times_left -= 1\n",
"        return DEBUG_PROMPT.format(error_type=type(execution_rst), error_message=str(execution_rst))\n",
"\n",
"\n",
"writer_chat_queue = [{\"recipient\": writer, \"message\": writer_init_message, \"summary_method\": writer_success_summary}]\n",
"safeguard_chat_queue = [\n",
"    {\"recipient\": safeguard, \"message\": safeguard_init_message, \"max_turns\": 1, \"summary_method\": safeguard_summary}\n",
"]\n",
"# safeguard is triggered only when receiving a message from the writer\n",
"optiguide_commander.register_nested_chats(safeguard_chat_queue, trigger=\"writer\")\n",
"# writer is triggered only when receiving a message from the user\n",
"optiguide_commander.register_nested_chats(writer_chat_queue, trigger=\"user\")"
]
},
|
||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Let the agents talk"
]
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 15,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\u001b[33muser\u001b[0m (to commander):\n",
|
||
"\n",
|
||
"What if we prohibit shipping from supplier 1 to roastery 2?\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n",
|
||
"\u001b[34m\n",
|
||
"********************************************************************************\u001b[0m\n",
|
||
"\u001b[34mStart a new chat with the following message: \n",
|
||
"You are a chatbot to:\n",
|
||
"(1) write Python code to answer users questions for supply chain-related coding\n",
|
||
"project;\n",
|
||
"(2) explain solutions from a Gurobi/Python solver.\n",
|
||
"\n",
|
||
"--- SOURCE CODE ---\n",
|
||
"import time\n",
|
||
"\n",
|
||
"from gurobipy import GRB, Model\n",
|
||
"\n",
|
||
"# Example data\n",
|
||
"\n",
|
||
"capacity_in_supplier = {'supplier1': 150, 'supplier2': 50, 'supplier3': 100}\n",
|
||
"\n",
|
||
"shipping_cost_from_supplier_to_roastery = {\n",
|
||
" ('supplier1', 'roastery1'): 5,\n",
|
||
" ('supplier1', 'roastery2'): 4,\n",
|
||
" ('supplier2', 'roastery1'): 6,\n",
|
||
" ('supplier2', 'roastery2'): 3,\n",
|
||
" ('supplier3', 'roastery1'): 2,\n",
|
||
" ('supplier3', 'roastery2'): 7\n",
|
||
"}\n",
|
||
"\n",
|
||
"roasting_cost_light = {'roastery1': 3, 'roastery2': 5}\n",
|
||
"\n",
|
||
"roasting_cost_dark = {'roastery1': 5, 'roastery2': 6}\n",
|
||
"\n",
|
||
"shipping_cost_from_roastery_to_cafe = {\n",
|
||
" ('roastery1', 'cafe1'): 5,\n",
|
||
" ('roastery1', 'cafe2'): 3,\n",
|
||
" ('roastery1', 'cafe3'): 6,\n",
|
||
" ('roastery2', 'cafe1'): 4,\n",
|
||
" ('roastery2', 'cafe2'): 5,\n",
|
||
" ('roastery2', 'cafe3'): 2\n",
|
||
"}\n",
|
||
"\n",
|
||
"light_coffee_needed_for_cafe = {'cafe1': 20, 'cafe2': 30, 'cafe3': 40}\n",
|
||
"\n",
|
||
"dark_coffee_needed_for_cafe = {'cafe1': 20, 'cafe2': 20, 'cafe3': 100}\n",
|
||
"\n",
|
||
"cafes = list(set(i[1] for i in shipping_cost_from_roastery_to_cafe.keys()))\n",
|
||
"roasteries = list(\n",
|
||
" set(i[1] for i in shipping_cost_from_supplier_to_roastery.keys()))\n",
|
||
"suppliers = list(\n",
|
||
" set(i[0] for i in shipping_cost_from_supplier_to_roastery.keys()))\n",
|
||
"\n",
|
||
"# Create a new model\n",
|
||
"model = Model(\"coffee_distribution\")\n",
|
||
"\n",
|
||
"# OPTIGUIDE DATA CODE GOES HERE\n",
|
||
"\n",
|
||
"# Create variables\n",
|
||
"x = model.addVars(shipping_cost_from_supplier_to_roastery.keys(),\n",
|
||
" vtype=GRB.INTEGER,\n",
|
||
" name=\"x\")\n",
|
||
"y_light = model.addVars(shipping_cost_from_roastery_to_cafe.keys(),\n",
|
||
" vtype=GRB.INTEGER,\n",
|
||
" name=\"y_light\")\n",
|
||
"y_dark = model.addVars(shipping_cost_from_roastery_to_cafe.keys(),\n",
|
||
" vtype=GRB.INTEGER,\n",
|
||
" name=\"y_dark\")\n",
|
||
"\n",
|
||
"# Set objective\n",
|
||
"model.setObjective(\n",
|
||
" sum(x[i] * shipping_cost_from_supplier_to_roastery[i]\n",
|
||
" for i in shipping_cost_from_supplier_to_roastery.keys()) +\n",
|
||
" sum(roasting_cost_light[r] * y_light[r, c] +\n",
|
||
" roasting_cost_dark[r] * y_dark[r, c]\n",
|
||
" for r, c in shipping_cost_from_roastery_to_cafe.keys()) + sum(\n",
|
||
" (y_light[j] + y_dark[j]) * shipping_cost_from_roastery_to_cafe[j]\n",
|
||
" for j in shipping_cost_from_roastery_to_cafe.keys()), GRB.MINIMIZE)\n",
|
||
"\n",
|
||
"# Conservation of flow constraint\n",
|
||
"for r in set(i[1] for i in shipping_cost_from_supplier_to_roastery.keys()):\n",
|
||
" model.addConstr(\n",
|
||
" sum(x[i] for i in shipping_cost_from_supplier_to_roastery.keys()\n",
|
||
" if i[1] == r) == sum(\n",
|
||
" y_light[j] + y_dark[j]\n",
|
||
" for j in shipping_cost_from_roastery_to_cafe.keys()\n",
|
||
" if j[0] == r), f\"flow_{r}\")\n",
|
||
"\n",
|
||
"# Add supply constraints\n",
|
||
"for s in set(i[0] for i in shipping_cost_from_supplier_to_roastery.keys()):\n",
|
||
" model.addConstr(\n",
|
||
" sum(x[i] for i in shipping_cost_from_supplier_to_roastery.keys()\n",
|
||
" if i[0] == s) <= capacity_in_supplier[s], f\"supply_{s}\")\n",
|
||
"\n",
|
||
"# Add demand constraints\n",
|
||
"for c in set(i[1] for i in shipping_cost_from_roastery_to_cafe.keys()):\n",
|
||
" model.addConstr(\n",
|
||
" sum(y_light[j] for j in shipping_cost_from_roastery_to_cafe.keys()\n",
|
||
" if j[1] == c) >= light_coffee_needed_for_cafe[c],\n",
|
||
" f\"light_demand_{c}\")\n",
|
||
" model.addConstr(\n",
|
||
" sum(y_dark[j] for j in shipping_cost_from_roastery_to_cafe.keys()\n",
|
||
" if j[1] == c) >= dark_coffee_needed_for_cafe[c],\n",
|
||
" f\"dark_demand_{c}\")\n",
|
||
"\n",
|
||
"# Optimize model\n",
|
||
"model.optimize()\n",
|
||
"m = model\n",
|
||
"\n",
|
||
"# OPTIGUIDE CONSTRAINT CODE GOES HERE\n",
|
||
"\n",
|
||
"# Solve\n",
|
||
"m.update()\n",
|
||
"model.optimize()\n",
|
||
"\n",
|
||
"print(time.ctime())\n",
|
||
"if m.status == GRB.OPTIMAL:\n",
|
||
" print(f'Optimal cost: {m.objVal}')\n",
|
||
"else:\n",
|
||
" print(\"Not solved to optimality. Optimization status:\", m.status)\n",
|
||
"\n",
|
||
"\n",
|
||
"--- DOC STR ---\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"Here are some example questions and their answers and codes:\n",
|
||
"--- EXAMPLES ---\n",
|
||
"\n",
|
||
"----------\n",
|
||
"Question: Why is it not recommended to use just one supplier for roastery 2?\n",
|
||
"Answer Code:\n",
|
||
"```python\n",
|
||
"z = m.addVars(suppliers, vtype=GRB.BINARY, name=\"z\")\n",
|
||
"m.addConstr(sum(z[s] for s in suppliers) <= 1, \"_\")\n",
|
||
"for s in suppliers:\n",
|
||
" m.addConstr(x[s,'roastery2'] <= capacity_in_supplier[s] * z[s], \"_\")\n",
|
||
"```\n",
|
||
"\n",
|
||
"----------\n",
|
||
"Question: What if there's a 13% jump in the demand for light coffee at cafe1?\n",
|
||
"Answer Code:\n",
|
||
"```python\n",
|
||
"light_coffee_needed_for_cafe[\"cafe1\"] = light_coffee_needed_for_cafe[\"cafe1\"] * (1 + 13/100)\n",
|
||
"```\n",
|
||
"\n",
|
||
"\n",
|
||
"---\n",
|
||
"\n",
|
||
"The execution result of the original source code is below.\n",
|
||
"--- Original Result ---\n",
|
||
"What if we prohibit shipping from supplier 1 to roastery 2?\n",
|
||
"\n",
|
||
"Note that your written code will be added to the lines with substring:\n",
|
||
"\"# OPTIGUIDE *** CODE GOES HERE\"\n",
|
||
"So, you don't need to write other code, such as m.optimize() or m.update().\n",
|
||
"You just need to write code snippet in ```python ...``` block.\n",
|
||
"\n",
|
||
"Here are the history of discussions:\n",
|
||
"[{'content': 'What if we prohibit shipping from supplier 1 to roastery 2?', 'role': 'user'}]\n",
|
||
"\n",
|
||
"Answer Code:\n",
|
||
"\n",
|
||
"\n",
|
||
"With the following carryover: \n",
|
||
"\u001b[0m\n",
|
||
"\u001b[34m\n",
|
||
"********************************************************************************\u001b[0m\n",
|
||
"\u001b[33mcommander\u001b[0m (to writer):\n",
|
||
"\n",
|
||
"You are a chatbot to:\n",
|
||
"(1) write Python code to answer users questions for supply chain-related coding\n",
|
||
"project;\n",
|
||
"(2) explain solutions from a Gurobi/Python solver.\n",
|
||
"\n",
|
||
"--- SOURCE CODE ---\n",
|
||
"import time\n",
|
||
"\n",
|
||
"from gurobipy import GRB, Model\n",
|
||
"\n",
|
||
"# Example data\n",
|
||
"\n",
|
||
"capacity_in_supplier = {'supplier1': 150, 'supplier2': 50, 'supplier3': 100}\n",
|
||
"\n",
|
||
"shipping_cost_from_supplier_to_roastery = {\n",
|
||
" ('supplier1', 'roastery1'): 5,\n",
|
||
" ('supplier1', 'roastery2'): 4,\n",
|
||
" ('supplier2', 'roastery1'): 6,\n",
|
||
" ('supplier2', 'roastery2'): 3,\n",
|
||
" ('supplier3', 'roastery1'): 2,\n",
|
||
" ('supplier3', 'roastery2'): 7\n",
      "}\n",
      "\n",
      "roasting_cost_light = {'roastery1': 3, 'roastery2': 5}\n",
      "\n",
      "roasting_cost_dark = {'roastery1': 5, 'roastery2': 6}\n",
      "\n",
      "shipping_cost_from_roastery_to_cafe = {\n",
      "    ('roastery1', 'cafe1'): 5,\n",
      "    ('roastery1', 'cafe2'): 3,\n",
      "    ('roastery1', 'cafe3'): 6,\n",
      "    ('roastery2', 'cafe1'): 4,\n",
      "    ('roastery2', 'cafe2'): 5,\n",
      "    ('roastery2', 'cafe3'): 2\n",
      "}\n",
      "\n",
      "light_coffee_needed_for_cafe = {'cafe1': 20, 'cafe2': 30, 'cafe3': 40}\n",
      "\n",
      "dark_coffee_needed_for_cafe = {'cafe1': 20, 'cafe2': 20, 'cafe3': 100}\n",
      "\n",
      "cafes = list(set(i[1] for i in shipping_cost_from_roastery_to_cafe.keys()))\n",
      "roasteries = list(\n",
      "    set(i[1] for i in shipping_cost_from_supplier_to_roastery.keys()))\n",
      "suppliers = list(\n",
      "    set(i[0] for i in shipping_cost_from_supplier_to_roastery.keys()))\n",
      "\n",
      "# Create a new model\n",
      "model = Model(\"coffee_distribution\")\n",
      "\n",
      "# OPTIGUIDE DATA CODE GOES HERE\n",
      "\n",
      "# Create variables\n",
      "x = model.addVars(shipping_cost_from_supplier_to_roastery.keys(),\n",
      "                  vtype=GRB.INTEGER,\n",
      "                  name=\"x\")\n",
      "y_light = model.addVars(shipping_cost_from_roastery_to_cafe.keys(),\n",
      "                        vtype=GRB.INTEGER,\n",
      "                        name=\"y_light\")\n",
      "y_dark = model.addVars(shipping_cost_from_roastery_to_cafe.keys(),\n",
      "                       vtype=GRB.INTEGER,\n",
      "                       name=\"y_dark\")\n",
      "\n",
      "# Set objective\n",
      "model.setObjective(\n",
      "    sum(x[i] * shipping_cost_from_supplier_to_roastery[i]\n",
      "        for i in shipping_cost_from_supplier_to_roastery.keys()) +\n",
      "    sum(roasting_cost_light[r] * y_light[r, c] +\n",
      "        roasting_cost_dark[r] * y_dark[r, c]\n",
      "        for r, c in shipping_cost_from_roastery_to_cafe.keys()) + sum(\n",
      "            (y_light[j] + y_dark[j]) * shipping_cost_from_roastery_to_cafe[j]\n",
      "            for j in shipping_cost_from_roastery_to_cafe.keys()), GRB.MINIMIZE)\n",
      "\n",
      "# Conservation of flow constraint\n",
      "for r in set(i[1] for i in shipping_cost_from_supplier_to_roastery.keys()):\n",
      "    model.addConstr(\n",
      "        sum(x[i] for i in shipping_cost_from_supplier_to_roastery.keys()\n",
      "            if i[1] == r) == sum(\n",
      "                y_light[j] + y_dark[j]\n",
      "                for j in shipping_cost_from_roastery_to_cafe.keys()\n",
      "                if j[0] == r), f\"flow_{r}\")\n",
      "\n",
      "# Add supply constraints\n",
      "for s in set(i[0] for i in shipping_cost_from_supplier_to_roastery.keys()):\n",
      "    model.addConstr(\n",
      "        sum(x[i] for i in shipping_cost_from_supplier_to_roastery.keys()\n",
      "            if i[0] == s) <= capacity_in_supplier[s], f\"supply_{s}\")\n",
      "\n",
      "# Add demand constraints\n",
      "for c in set(i[1] for i in shipping_cost_from_roastery_to_cafe.keys()):\n",
      "    model.addConstr(\n",
      "        sum(y_light[j] for j in shipping_cost_from_roastery_to_cafe.keys()\n",
      "            if j[1] == c) >= light_coffee_needed_for_cafe[c],\n",
      "        f\"light_demand_{c}\")\n",
      "    model.addConstr(\n",
      "        sum(y_dark[j] for j in shipping_cost_from_roastery_to_cafe.keys()\n",
      "            if j[1] == c) >= dark_coffee_needed_for_cafe[c],\n",
      "        f\"dark_demand_{c}\")\n",
      "\n",
      "# Optimize model\n",
      "model.optimize()\n",
      "m = model\n",
      "\n",
      "# OPTIGUIDE CONSTRAINT CODE GOES HERE\n",
      "\n",
      "# Solve\n",
      "m.update()\n",
      "model.optimize()\n",
      "\n",
      "print(time.ctime())\n",
      "if m.status == GRB.OPTIMAL:\n",
      "    print(f'Optimal cost: {m.objVal}')\n",
      "else:\n",
      "    print(\"Not solved to optimality. Optimization status:\", m.status)\n",
      "\n",
      "\n",
      "--- DOC STR ---\n",
      "\n",
      "---\n",
      "\n",
      "Here are some example questions and their answers and codes:\n",
      "--- EXAMPLES ---\n",
      "\n",
      "----------\n",
      "Question: Why is it not recommended to use just one supplier for roastery 2?\n",
      "Answer Code:\n",
      "```python\n",
      "z = m.addVars(suppliers, vtype=GRB.BINARY, name=\"z\")\n",
      "m.addConstr(sum(z[s] for s in suppliers) <= 1, \"_\")\n",
      "for s in suppliers:\n",
      "    m.addConstr(x[s,'roastery2'] <= capacity_in_supplier[s] * z[s], \"_\")\n",
      "```\n",
      "\n",
      "----------\n",
      "Question: What if there's a 13% jump in the demand for light coffee at cafe1?\n",
      "Answer Code:\n",
      "```python\n",
      "light_coffee_needed_for_cafe[\"cafe1\"] = light_coffee_needed_for_cafe[\"cafe1\"] * (1 + 13/100)\n",
      "```\n",
      "\n",
      "\n",
      "---\n",
      "\n",
      "The execution result of the original source code is below.\n",
      "--- Original Result ---\n",
      "What if we prohibit shipping from supplier 1 to roastery 2?\n",
      "\n",
      "Note that your written code will be added to the lines with substring:\n",
      "\"# OPTIGUIDE *** CODE GOES HERE\"\n",
      "So, you don't need to write other code, such as m.optimize() or m.update().\n",
      "You just need to write code snippet in ```python ...``` block.\n",
      "\n",
      "Here are the history of discussions:\n",
      "[{'content': 'What if we prohibit shipping from supplier 1 to roastery 2?', 'role': 'user'}]\n",
      "\n",
      "Answer Code:\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mwriter\u001b[0m (to commander):\n",
      "\n",
      "To answer the user's question regarding the prohibition of shipping from supplier 1 to roastery 2, we need to add a constraint to the optimization model that sets the shipping quantity `x` for the (`supplier1`, `roastery2`) route to zero. This will effectively disallow any shipping from supplier 1 to roastery 2. The code to do this should be placed in the section of the source code that is indicated by the comment \"# OPTIGUIDE DATA CODE GOES HERE\".\n",
      "\n",
      "Here's the Python code snippet to add this constraint:\n",
      "\n",
      "```python\n",
      "# filename: supply_chain_code.py\n",
      "model.addConstr(x[('supplier1', 'roastery2')] == 0, \"prohibit_supplier1_roastery2\")\n",
      "```\n",
      "\n",
      "Adding this constraint will prevent the optimizer from selecting any transport from `supplier1` to `roastery2`, which is what the prohibition is intended to achieve.\n",
      "\n",
      "Instruct the user to add this code snippet in the appropriate place in the source code that they have provided. After adding this code, the `m.optimize()` function will solve the modified version of the problem that includes the new constraint.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[34m\n",
      "********************************************************************************\u001b[0m\n",
      "\u001b[34mStart a new chat with the following message: \n",
      "\n",
      "Given the source code:\n",
      "# filename: supply_chain_code.py\n",
      "model.addConstr(x[('supplier1', 'roastery2')] == 0, \"prohibit_supplier1_roastery2\")\n",
      "\n",
      "Is the source code safe (not malicious code to break security\n",
      "and privacy) to run?\n",
      "Answer only one word.\n",
      "If not safe, answer `DANGER`; else, answer `SAFE`.\n",
      "\n",
      "\n",
      "With the following carryover: \n",
      "\u001b[0m\n",
      "\u001b[34m\n",
      "********************************************************************************\u001b[0m\n",
      "\u001b[33mcommander\u001b[0m (to safeguard):\n",
      "\n",
      "\n",
      "Given the source code:\n",
      "# filename: supply_chain_code.py\n",
      "model.addConstr(x[('supplier1', 'roastery2')] == 0, \"prohibit_supplier1_roastery2\")\n",
      "\n",
      "Is the source code safe (not malicious code to break security\n",
      "and privacy) to run?\n",
      "Answer only one word.\n",
      "If not safe, answer `DANGER`; else, answer `SAFE`.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33msafeguard\u001b[0m (to commander):\n",
      "\n",
"SAFE\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "Gurobi Optimizer version 11.0.0 build v11.0.0rc2 (mac64[arm] - Darwin 23.2.0 23C71)\n",
      "\n",
      "CPU model: Apple M3 Max\n",
      "Thread count: 14 physical cores, 14 logical processors, using up to 14 threads\n",
      "\n",
      "Optimize a model with 11 rows, 18 columns and 36 nonzeros\n",
      "Model fingerprint: 0x8aa2c280\n",
      "Variable types: 0 continuous, 18 integer (0 binary)\n",
      "Coefficient statistics:\n",
      "  Matrix range     [1e+00, 1e+00]\n",
      "  Objective range  [2e+00, 1e+01]\n",
      "  Bounds range     [0e+00, 0e+00]\n",
      "  RHS range        [2e+01, 2e+02]\n",
      "Found heuristic solution: objective 2900.0000000\n",
      "Presolve time: 0.00s\n",
      "Presolved: 11 rows, 18 columns, 36 nonzeros\n",
      "Variable types: 0 continuous, 18 integer (0 binary)\n",
      "Found heuristic solution: objective 2896.0000000\n",
      "\n",
      "Root relaxation: objective 2.470000e+03, 11 iterations, 0.00 seconds (0.00 work units)\n",
      "\n",
      "    Nodes    |    Current Node    |     Objective Bounds      |     Work\n",
      " Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n",
      "\n",
      "*    0     0               0    2470.0000000 2470.00000  0.00%     -    0s\n",
      "\n",
      "Explored 1 nodes (11 simplex iterations) in 0.00 seconds (0.00 work units)\n",
      "Thread count was 14 (of 14 available processors)\n",
      "\n",
      "Solution count 3: 2470 2896 2900 \n",
      "\n",
      "Optimal solution found (tolerance 1.00e-04)\n",
      "Best objective 2.470000000000e+03, best bound 2.470000000000e+03, gap 0.0000%\n",
      "Gurobi Optimizer version 11.0.0 build v11.0.0rc2 (mac64[arm] - Darwin 23.2.0 23C71)\n",
      "\n",
      "CPU model: Apple M3 Max\n",
      "Thread count: 14 physical cores, 14 logical processors, using up to 14 threads\n",
      "\n",
      "Optimize a model with 11 rows, 18 columns and 36 nonzeros\n",
      "Model fingerprint: 0x8aa2c280\n",
      "Variable types: 0 continuous, 18 integer (0 binary)\n",
      "Coefficient statistics:\n",
      "  Matrix range     [1e+00, 1e+00]\n",
      "  Objective range  [2e+00, 1e+01]\n",
      "  Bounds range     [0e+00, 0e+00]\n",
      "  RHS range        [2e+01, 2e+02]\n",
      "Found heuristic solution: objective 2900.0000000\n",
      "Presolve time: 0.00s\n",
      "Presolved: 11 rows, 18 columns, 36 nonzeros\n",
      "Variable types: 0 continuous, 18 integer (0 binary)\n",
      "Found heuristic solution: objective 2896.0000000\n",
      "\n",
      "Root relaxation: objective 2.470000e+03, 11 iterations, 0.00 seconds (0.00 work units)\n",
      "\n",
      "    Nodes    |    Current Node    |     Objective Bounds      |     Work\n",
      " Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n",
      "\n",
      "*    0     0               0    2470.0000000 2470.00000  0.00%     -    0s\n",
      "\n",
      "Explored 1 nodes (11 simplex iterations) in 0.00 seconds (0.00 work units)\n",
      "Thread count was 14 (of 14 available processors)\n",
      "\n",
      "Solution count 3: 2470 2896 2900 \n",
      "\n",
      "Optimal solution found (tolerance 1.00e-04)\n",
      "Best objective 2.470000000000e+03, best bound 2.470000000000e+03, gap 0.0000%\n",
      "Gurobi Optimizer version 11.0.0 build v11.0.0rc2 (mac64[arm] - Darwin 23.2.0 23C71)\n",
      "\n",
      "CPU model: Apple M3 Max\n",
      "Thread count: 14 physical cores, 14 logical processors, using up to 14 threads\n",
      "\n",
      "Optimize a model with 11 rows, 18 columns and 36 nonzeros\n",
      "Model fingerprint: 0x8aa2c280\n",
      "Variable types: 0 continuous, 18 integer (0 binary)\n",
      "Coefficient statistics:\n",
      "  Matrix range     [1e+00, 1e+00]\n",
      "  Objective range  [2e+00, 1e+01]\n",
      "  Bounds range     [0e+00, 0e+00]\n",
      "  RHS range        [2e+01, 2e+02]\n",
      "Presolved: 11 rows, 18 columns, 36 nonzeros\n",
      "\n",
      "Continuing optimization...\n",
      "\n",
      "\n",
      "Explored 1 nodes (11 simplex iterations) in 0.00 seconds (0.00 work units)\n",
      "Thread count was 14 (of 14 available processors)\n",
      "\n",
      "Solution count 3: 2470 2896 2900 \n",
      "\n",
      "Optimal solution found (tolerance 1.00e-04)\n",
      "Best objective 2.470000000000e+03, best bound 2.470000000000e+03, gap 0.0000%\n",
      "Sun Feb 25 18:56:45 2024\n",
      "Optimal cost: 2470.0\n",
      "Gurobi Optimizer version 11.0.0 build v11.0.0rc2 (mac64[arm] - Darwin 23.2.0 23C71)\n",
      "\n",
      "CPU model: Apple M3 Max\n",
      "Thread count: 14 physical cores, 14 logical processors, using up to 14 threads\n",
      "\n",
      "Optimize a model with 11 rows, 18 columns and 36 nonzeros\n",
      "Model fingerprint: 0x8aa2c280\n",
      "Variable types: 0 continuous, 18 integer (0 binary)\n",
      "Coefficient statistics:\n",
      "  Matrix range     [1e+00, 1e+00]\n",
      "  Objective range  [2e+00, 1e+01]\n",
      "  Bounds range     [0e+00, 0e+00]\n",
      "  RHS range        [2e+01, 2e+02]\n",
      "Presolved: 11 rows, 18 columns, 36 nonzeros\n",
      "\n",
      "Continuing optimization...\n",
      "\n",
      "\n",
      "Explored 1 nodes (11 simplex iterations) in 0.00 seconds (0.00 work units)\n",
      "Thread count was 14 (of 14 available processors)\n",
      "\n",
      "Solution count 3: 2470 2896 2900 \n",
      "\n",
      "Optimal solution found (tolerance 1.00e-04)\n",
      "Best objective 2.470000000000e+03, best bound 2.470000000000e+03, gap 0.0000%\n",
      "Sun Feb 25 18:56:45 2024\n",
      "Optimal cost: 2470.0\n",
      "\u001b[33mOptimization problem solved. The objective value is: 2470.0\u001b[0m\n",
      "\u001b[33mcommander\u001b[0m (to writer):\n",
      "\n",
      "Here are the execution results: Optimization problem solved. The objective value is: 2470.0\n",
      "\n",
      "Can you organize these information to a human readable answer?\n",
      "Remember to compare the new results to the original results you obtained in the\n",
      "beginning.\n",
      "\n",
      "--- HUMAN READABLE ANSWER ---\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mwriter\u001b[0m (to commander):\n",
      "\n",
      "Certainly! Here's a human-readable answer:\n",
      "\n",
      "After integrating the constraint to prohibit shipping from supplier 1 to roastery 2, the optimization problem was solved with a total cost of $2470.0.\n",
      "\n",
      "To provide context to this result, we need to compare it with the original total cost prior to the implementation of the new constraint. However, the original total cost was not provided in the information given to me. If the original total cost was known, we would compare the two to understand the financial impact of this constraint on the supply chain.\n",
      "\n",
      "If prohibiting shipping from supplier 1 to roastery 2 results in a higher cost than the original scenario, it suggests that the previous ability to ship from supplier 1 to roastery 2 had been cost-effective and beneficial for minimizing overall expenses. On the other hand, if the cost is the same or lower, then shipping from supplier 1 to roastery 2 may not have been crucial for the cost optimization of the supply chain, or other routes may have been able to compensate effectively for this restriction.\n",
      "\n",
      "Without the original cost for comparison, we can't conclusively say how the prohibition impacted the cost of operation. But the new total cost of $2470.0 will be the baseline for future changes or optimizations done following this new restriction.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mcommander\u001b[0m (to user):\n",
      "\n",
      "Certainly! Here's a human-readable answer:\n",
      "\n",
      "After integrating the constraint to prohibit shipping from supplier 1 to roastery 2, the optimization problem was solved with a total cost of $2470.0.\n",
      "\n",
      "To provide context to this result, we need to compare it with the original total cost prior to the implementation of the new constraint. However, the original total cost was not provided in the information given to me. If the original total cost was known, we would compare the two to understand the financial impact of this constraint on the supply chain.\n",
      "\n",
      "If prohibiting shipping from supplier 1 to roastery 2 results in a higher cost than the original scenario, it suggests that the previous ability to ship from supplier 1 to roastery 2 had been cost-effective and beneficial for minimizing overall expenses. On the other hand, if the cost is the same or lower, then shipping from supplier 1 to roastery 2 may not have been crucial for the cost optimization of the supply chain, or other routes may have been able to compensate effectively for this restriction.\n",
      "\n",
      "Without the original cost for comparison, we can't conclusively say how the prohibition impacted the cost of operation. But the new total cost of $2470.0 will be the baseline for future changes or optimizations done following this new restriction.\n",
      "\n",
      "--------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "chat_res = user.initiate_chat(\n",
    "    optiguide_commander, message=\"What if we prohibit shipping from supplier 1 to roastery 2?\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Get Final Results from the Returned ChatResult Object "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Certainly! Here's a human-readable answer:\n",
      "\n",
      "After integrating the constraint to prohibit shipping from supplier 1 to roastery 2, the optimization problem was solved with a total cost of $2470.0.\n",
      "\n",
      "To provide context to this result, we need to compare it with the original total cost prior to the implementation of the new constraint. However, the original total cost was not provided in the information given to me. If the original total cost was known, we would compare the two to understand the financial impact of this constraint on the supply chain.\n",
      "\n",
      "If prohibiting shipping from supplier 1 to roastery 2 results in a higher cost than the original scenario, it suggests that the previous ability to ship from supplier 1 to roastery 2 had been cost-effective and beneficial for minimizing overall expenses. On the other hand, if the cost is the same or lower, then shipping from supplier 1 to roastery 2 may not have been crucial for the cost optimization of the supply chain, or other routes may have been able to compensate effectively for this restriction.\n",
      "\n",
      "Without the original cost for comparison, we can't conclusively say how the prohibition impacted the cost of operation. But the new total cost of $2470.0 will be the baseline for future changes or optimizations done following this new restriction.\n"
     ]
    }
],
   "source": [
    "print(chat_res.summary)"
   ]
  }
 ],
 "metadata": {
  "extra_files_to_copy": [
   "optiGuide_new_design.png"
  ],
  "front_matter": {
   "description": "This is a nested chat re-implementation of OptiGuide which is an LLM-based supply chain optimization framework.",
   "tags": [
    "nested chat",
    "hierarchical chat",
    "orchestration"
   ]
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  },
  "vscode": {
   "interpreter": {
    "hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
   }
  },
  "widgets": {
   "application/vnd.jupyter.widget-state+json": {
    "state": {
     "2d910cfd2d2a4fc49fc30fbbdc5576a7": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {
       "_model_module": "@jupyter-widgets/base",
       "_model_module_version": "2.0.0",
       "_model_name": "LayoutModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "LayoutView",
       "align_content": null,
       "align_items": null,
       "align_self": null,
       "border_bottom": null,
       "border_left": null,
       "border_right": null,
       "border_top": null,
       "bottom": null,
       "display": null,
       "flex": null,
       "flex_flow": null,
       "grid_area": null,
       "grid_auto_columns": null,
       "grid_auto_flow": null,
       "grid_auto_rows": null,
       "grid_column": null,
       "grid_gap": null,
       "grid_row": null,
       "grid_template_areas": null,
       "grid_template_columns": null,
       "grid_template_rows": null,
       "height": null,
       "justify_content": null,
       "justify_items": null,
       "left": null,
       "margin": null,
       "max_height": null,
       "max_width": null,
       "min_height": null,
       "min_width": null,
       "object_fit": null,
       "object_position": null,
       "order": null,
       "overflow": null,
       "padding": null,
       "right": null,
       "top": null,
       "visibility": null,
       "width": null
      }
     },
     "454146d0f7224f038689031002906e6f": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HBoxModel",
      "state": {
       "_dom_classes": [],
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "HBoxModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/controls",
       "_view_module_version": "2.0.0",
       "_view_name": "HBoxView",
       "box_style": "",
       "children": [
        "IPY_MODEL_e4ae2b6f5a974fd4bafb6abb9d12ff26",
        "IPY_MODEL_577e1e3cc4db4942b0883577b3b52755",
        "IPY_MODEL_b40bdfb1ac1d4cffb7cefcb870c64d45"
       ],
       "layout": "IPY_MODEL_dc83c7bff2f241309537a8119dfc7555",
       "tabbable": null,
       "tooltip": null
      }
     },
     "577e1e3cc4db4942b0883577b3b52755": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "FloatProgressModel",
      "state": {
       "_dom_classes": [],
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "FloatProgressModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/controls",
       "_view_module_version": "2.0.0",
       "_view_name": "ProgressView",
       "bar_style": "success",
       "description": "",
       "description_allow_html": false,
       "layout": "IPY_MODEL_2d910cfd2d2a4fc49fc30fbbdc5576a7",
       "max": 1,
       "min": 0,
       "orientation": "horizontal",
       "style": "IPY_MODEL_74a6ba0c3cbc4051be0a83e152fe1e62",
       "tabbable": null,
       "tooltip": null,
       "value": 1
      }
     },
     "6086462a12d54bafa59d3c4566f06cb2": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {
       "_model_module": "@jupyter-widgets/base",
       "_model_module_version": "2.0.0",
       "_model_name": "LayoutModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "LayoutView",
       "align_content": null,
       "align_items": null,
       "align_self": null,
       "border_bottom": null,
       "border_left": null,
       "border_right": null,
       "border_top": null,
       "bottom": null,
       "display": null,
       "flex": null,
       "flex_flow": null,
       "grid_area": null,
       "grid_auto_columns": null,
       "grid_auto_flow": null,
       "grid_auto_rows": null,
       "grid_column": null,
       "grid_gap": null,
       "grid_row": null,
       "grid_template_areas": null,
       "grid_template_columns": null,
       "grid_template_rows": null,
       "height": null,
       "justify_content": null,
       "justify_items": null,
       "left": null,
       "margin": null,
       "max_height": null,
       "max_width": null,
       "min_height": null,
       "min_width": null,
       "object_fit": null,
       "object_position": null,
       "order": null,
       "overflow": null,
       "padding": null,
       "right": null,
       "top": null,
       "visibility": null,
       "width": null
      }
},
     "74a6ba0c3cbc4051be0a83e152fe1e62": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "ProgressStyleModel",
      "state": {
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "ProgressStyleModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "StyleView",
       "bar_color": null,
       "description_width": ""
      }
     },
     "7d3f3d9e15894d05a4d188ff4f466554": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLStyleModel",
      "state": {
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "HTMLStyleModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "StyleView",
       "background": null,
       "description_width": "",
       "font_size": null,
       "text_color": null
      }
     },
     "b40bdfb1ac1d4cffb7cefcb870c64d45": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLModel",
      "state": {
       "_dom_classes": [],
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "HTMLModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/controls",
       "_view_module_version": "2.0.0",
       "_view_name": "HTMLView",
       "description": "",
       "description_allow_html": false,
       "layout": "IPY_MODEL_f1355871cc6f4dd4b50d9df5af20e5c8",
       "placeholder": "",
       "style": "IPY_MODEL_ca245376fd9f4354af6b2befe4af4466",
       "tabbable": null,
       "tooltip": null,
       "value": " 1/1 [00:00&lt;00:00, 44.69it/s]"
      }
     },
     "ca245376fd9f4354af6b2befe4af4466": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLStyleModel",
      "state": {
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "HTMLStyleModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "StyleView",
       "background": null,
       "description_width": "",
       "font_size": null,
       "text_color": null
      }
     },
     "dc83c7bff2f241309537a8119dfc7555": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {
       "_model_module": "@jupyter-widgets/base",
       "_model_module_version": "2.0.0",
       "_model_name": "LayoutModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "LayoutView",
       "align_content": null,
       "align_items": null,
       "align_self": null,
       "border_bottom": null,
       "border_left": null,
       "border_right": null,
       "border_top": null,
       "bottom": null,
       "display": null,
       "flex": null,
       "flex_flow": null,
       "grid_area": null,
       "grid_auto_columns": null,
       "grid_auto_flow": null,
       "grid_auto_rows": null,
       "grid_column": null,
       "grid_gap": null,
       "grid_row": null,
       "grid_template_areas": null,
       "grid_template_columns": null,
       "grid_template_rows": null,
       "height": null,
       "justify_content": null,
       "justify_items": null,
       "left": null,
       "margin": null,
       "max_height": null,
       "max_width": null,
       "min_height": null,
       "min_width": null,
       "object_fit": null,
       "object_position": null,
       "order": null,
       "overflow": null,
       "padding": null,
       "right": null,
       "top": null,
       "visibility": null,
       "width": null
      }
     },
     "e4ae2b6f5a974fd4bafb6abb9d12ff26": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "HTMLModel",
      "state": {
       "_dom_classes": [],
       "_model_module": "@jupyter-widgets/controls",
       "_model_module_version": "2.0.0",
       "_model_name": "HTMLModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/controls",
       "_view_module_version": "2.0.0",
       "_view_name": "HTMLView",
       "description": "",
       "description_allow_html": false,
       "layout": "IPY_MODEL_6086462a12d54bafa59d3c4566f06cb2",
       "placeholder": "",
       "style": "IPY_MODEL_7d3f3d9e15894d05a4d188ff4f466554",
       "tabbable": null,
       "tooltip": null,
       "value": "100%"
      }
     },
     "f1355871cc6f4dd4b50d9df5af20e5c8": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {
       "_model_module": "@jupyter-widgets/base",
       "_model_module_version": "2.0.0",
       "_model_name": "LayoutModel",
       "_view_count": null,
       "_view_module": "@jupyter-widgets/base",
       "_view_module_version": "2.0.0",
       "_view_name": "LayoutView",
       "align_content": null,
       "align_items": null,
       "align_self": null,
       "border_bottom": null,
       "border_left": null,
       "border_right": null,
       "border_top": null,
       "bottom": null,
       "display": null,
       "flex": null,
       "flex_flow": null,
       "grid_area": null,
       "grid_auto_columns": null,
       "grid_auto_flow": null,
       "grid_auto_rows": null,
       "grid_column": null,
       "grid_gap": null,
       "grid_row": null,
       "grid_template_areas": null,
       "grid_template_columns": null,
       "grid_template_rows": null,
       "height": null,
       "justify_content": null,
       "justify_items": null,
       "left": null,
       "margin": null,
       "max_height": null,
       "max_width": null,
       "min_height": null,
       "min_width": null,
       "object_fit": null,
       "object_position": null,
       "order": null,
       "overflow": null,
       "padding": null,
       "right": null,
       "top": null,
       "visibility": null,
       "width": null
      }
     }
    },
    "version_major": 2,
    "version_minor": 0
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}