{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Agent Chat with custom model loading\n",
"\n",
"In this notebook, we demonstrate how a custom model can be defined and loaded, and what protocol it needs to comply to.\n",
"\n",
"**NOTE: Depending on what model you use, you may need to play with the default prompts of the Agent's**\n",
"\n",
"## Requirements\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
"\n",
"```bash\n",
"pip install pyautogen torch transformers sentencepiece\n",
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
":::\n",
"````"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from types import SimpleNamespace\n",
"\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig\n",
"\n",
"import autogen\n",
"from autogen import AssistantAgent, UserProxyAgent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create and configure the custom model\n",
"\n",
"A custom model class can be created in many ways, but needs to adhere to the `ModelClient` protocol and response structure which is defined in client.py and shown below.\n",
"\n",
"The response protocol has some minimum requirements, but can be extended to include any additional information that is needed.\n",
"Message retrieval therefore can be customized, but needs to return a list of strings or a list of `ModelClientResponseProtocol.Choice.Message` objects.\n",
"\n",
"\n",
"```python\n",
"class ModelClient(Protocol):\n",
" \"\"\"\n",
" A client class must implement the following methods:\n",
" - create must return a response object that implements the ModelClientResponseProtocol\n",
" - cost must return the cost of the response\n",
" - get_usage must return a dict with the following keys:\n",
" - prompt_tokens\n",
" - completion_tokens\n",
" - total_tokens\n",
" - cost\n",
" - model\n",
"\n",
" This class is used to create a client that can be used by OpenAIWrapper.\n",
" The response returned from create must adhere to the ModelClientResponseProtocol but can be extended however needed.\n",
" The message_retrieval method must be implemented to return a list of str or a list of messages from the response.\n",
" \"\"\"\n",
"\n",
" RESPONSE_USAGE_KEYS = [\"prompt_tokens\", \"completion_tokens\", \"total_tokens\", \"cost\", \"model\"]\n",
"\n",
" class ModelClientResponseProtocol(Protocol):\n",
" class Choice(Protocol):\n",
" class Message(Protocol):\n",
" content: Optional[str]\n",
"\n",
" message: Message\n",
"\n",
" choices: List[Choice]\n",
" model: str\n",
"\n",
" def create(self, params) -> ModelClientResponseProtocol:\n",
" ...\n",
"\n",
" def message_retrieval(\n",
" self, response: ModelClientResponseProtocol\n",
" ) -> Union[List[str], List[ModelClient.ModelClientResponseProtocol.Choice.Message]]:\n",
" \"\"\"\n",
" Retrieve and return a list of strings or a list of Choice.Message from the response.\n",
"\n",
" NOTE: if a list of Choice.Message is returned, it currently needs to contain the fields of OpenAI's ChatCompletion Message object,\n",
" since that is expected for function or tool calling in the rest of the codebase at the moment, unless a custom agent is being used.\n",
" \"\"\"\n",
" ...\n",
"\n",
" def cost(self, response: ModelClientResponseProtocol) -> float:\n",
" ...\n",
"\n",
" @staticmethod\n",
" def get_usage(response: ModelClientResponseProtocol) -> Dict:\n",
" \"\"\"Return usage summary of the response using RESPONSE_USAGE_KEYS.\"\"\"\n",
" ...\n",
"```\n"
]
},
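{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the minimum surface concrete before the full Hugging Face example, here is a small, model-free sketch (illustrative only, not part of autogen): an `EchoModelClient` that simply echoes the last message back. Something this small already satisfies the protocol and can be handy for checking the agent wiring without loading a real model.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# minimal, model-free client sketch (illustrative only, not part of autogen)\n",
"from types import SimpleNamespace\n",
"\n",
"\n",
"class EchoModelClient:\n",
"    def __init__(self, config, **kwargs):\n",
"        self.model_name = config.get(\"model\", \"echo\")\n",
"\n",
"    def create(self, params):\n",
"        # echo the last message back in a protocol-compliant response\n",
"        response = SimpleNamespace()\n",
"        response.model = self.model_name\n",
"        choice = SimpleNamespace()\n",
"        choice.message = SimpleNamespace(content=params[\"messages\"][-1][\"content\"], function_call=None)\n",
"        response.choices = [choice]\n",
"        return response\n",
"\n",
"    def message_retrieval(self, response):\n",
"        return [choice.message.content for choice in response.choices]\n",
"\n",
"    def cost(self, response) -> float:\n",
"        response.cost = 0\n",
"        return 0\n",
"\n",
"    @staticmethod\n",
"    def get_usage(response):\n",
"        return {}"
]
},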
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example of simple custom client\n",
"\n",
"Following the huggingface example for using [Mistral's Open-Orca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)\n",
"\n",
"For the response object, python's `SimpleNamespace` is used to create a simple object that can be used to store the response data, but any object that follows the `ClientResponseProtocol` can be used.\n"
]
},
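{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of why `SimpleNamespace` is convenient here: it gives attribute-style access, so a response can be shaped to match the protocol without defining a class.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# SimpleNamespace supports attribute access, matching the response protocol shape\n",
"demo_response = SimpleNamespace(\n",
"    model=\"demo\",\n",
"    choices=[SimpleNamespace(message=SimpleNamespace(content=\"hi\", function_call=None))],\n",
")\n",
"print(demo_response.choices[0].message.content)  # -> hi"
]
},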
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# custom client with custom model loader\n",
"\n",
"\n",
"class CustomModelClient:\n",
" def __init__(self, config, **kwargs):\n",
" print(f\"CustomModelClient config: {config}\")\n",
" self.device = config.get(\"device\", \"cpu\")\n",
" self.model = AutoModelForCausalLM.from_pretrained(config[\"model\"]).to(self.device)\n",
" self.model_name = config[\"model\"]\n",
" self.tokenizer = AutoTokenizer.from_pretrained(config[\"model\"], use_fast=False)\n",
" self.tokenizer.pad_token_id = self.tokenizer.eos_token_id\n",
"\n",
" # params are set by the user and consumed by the user since they are providing a custom model\n",
" # so anything can be done here\n",
" gen_config_params = config.get(\"params\", {})\n",
" self.max_length = gen_config_params.get(\"max_length\", 256)\n",
"\n",
" print(f\"Loaded model {config['model']} to {self.device}\")\n",
"\n",
" def create(self, params):\n",
" if params.get(\"stream\", False) and \"messages\" in params:\n",
" raise NotImplementedError(\"Local models do not support streaming.\")\n",
" else:\n",
" num_of_responses = params.get(\"n\", 1)\n",
"\n",
" # can create my own data response class\n",
" # here using SimpleNamespace for simplicity\n",
" # as long as it adheres to the ClientResponseProtocol\n",
"\n",
" response = SimpleNamespace()\n",
"\n",
" inputs = self.tokenizer.apply_chat_template(\n",
" params[\"messages\"], return_tensors=\"pt\", add_generation_prompt=True\n",
" ).to(self.device)\n",
" inputs_length = inputs.shape[-1]\n",
"\n",
" # add inputs_length to max_length\n",
" max_length = self.max_length + inputs_length\n",
" generation_config = GenerationConfig(\n",
" max_length=max_length,\n",
" eos_token_id=self.tokenizer.eos_token_id,\n",
" pad_token_id=self.tokenizer.eos_token_id,\n",
" )\n",
"\n",
" response.choices = []\n",
" response.model = self.model_name\n",
"\n",
" for _ in range(num_of_responses):\n",
" outputs = self.model.generate(inputs, generation_config=generation_config)\n",
" # Decode only the newly generated text, excluding the prompt\n",
" text = self.tokenizer.decode(outputs[0, inputs_length:])\n",
" choice = SimpleNamespace()\n",
" choice.message = SimpleNamespace()\n",
" choice.message.content = text\n",
" choice.message.function_call = None\n",
" response.choices.append(choice)\n",
"\n",
" return response\n",
"\n",
" def message_retrieval(self, response):\n",
" \"\"\"Retrieve the messages from the response.\"\"\"\n",
" choices = response.choices\n",
" return [choice.message.content for choice in choices]\n",
"\n",
" def cost(self, response) -> float:\n",
" \"\"\"Calculate the cost of the response.\"\"\"\n",
" response.cost = 0\n",
" return 0\n",
"\n",
" @staticmethod\n",
" def get_usage(response):\n",
" # returns a dict of prompt_tokens, completion_tokens, total_tokens, cost, model\n",
" # if usage needs to be tracked, else None\n",
" return {}"
]
},
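{
"cell_type": "markdown",
"metadata": {},
"source": [
"The client above returns an empty dict from `get_usage`, which simply disables usage tracking. If you do want token accounting to show up in autogen's usage summaries, `get_usage` can instead return the `RESPONSE_USAGE_KEYS` fields. A sketch, assuming the counts were stored on the response object during `create` (they are not in the example above):\n",
"\n",
"```python\n",
"@staticmethod\n",
"def get_usage(response):\n",
"    return {\n",
"        \"prompt_tokens\": getattr(response, \"prompt_tokens\", 0),\n",
"        \"completion_tokens\": getattr(response, \"completion_tokens\", 0),\n",
"        \"total_tokens\": getattr(response, \"total_tokens\", 0),\n",
"        \"cost\": getattr(response, \"cost\", 0),\n",
"        \"model\": response.model,\n",
"    }\n",
"```"
]
},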
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
"The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for an environment variable of a specified name (\"OAI_CONFIG_LIST\" in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).\n",
"\n",
"The json looks like the following:\n",
"```json\n",
"[\n",
" {\n",
" \"model\": \"gpt-4\",\n",
" \"api_key\": \"<your OpenAI API key here>\"\n",
" },\n",
" {\n",
" \"model\": \"gpt-4\",\n",
" \"api_key\": \"<your Azure OpenAI API key here>\",\n",
" \"base_url\": \"<your Azure OpenAI API base here>\",\n",
" \"api_type\": \"azure\",\n",
" \"api_version\": \"2024-02-01\"\n",
" },\n",
" {\n",
" \"model\": \"gpt-4-32k\",\n",
" \"api_key\": \"<your Azure OpenAI API key here>\",\n",
" \"base_url\": \"<your Azure OpenAI API base here>\",\n",
" \"api_type\": \"azure\",\n",
" \"api_version\": \"2024-02-01\"\n",
" }\n",
"]\n",
"```\n",
"\n",
"You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
]
},
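{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the environment-variable route can be exercised like this (placeholder key shown; adapt to your setup):\n",
"\n",
"```python\n",
"import json\n",
"import os\n",
"\n",
"os.environ[\"OAI_CONFIG_LIST\"] = json.dumps(\n",
"    [{\"model\": \"gpt-4\", \"api_key\": \"<your OpenAI API key here>\"}]\n",
")\n",
"```"
]
},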
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set the config for the custom model\n",
"\n",
"You can add any paramteres that are needed for the custom model loading in the same configuration list.\n",
"\n",
"It is important to add the `model_client_cls` field and set it to a string that corresponds to the class name: `\"CustomModelClient\"`.\n",
"\n",
"```json\n",
"{\n",
" \"model\": \"Open-Orca/Mistral-7B-OpenOrca\",\n",
" \"model_client_cls\": \"CustomModelClient\",\n",
" \"device\": \"cuda\",\n",
" \"n\": 1,\n",
" \"params\": {\n",
" \"max_length\": 1000,\n",
" }\n",
"},\n",
"```"
]
},
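{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer not to edit the OAI_CONFIG_LIST file, an equivalent config list can also be built inline (a sketch; set `device` to match your hardware):\n",
"\n",
"```python\n",
"config_list_custom = [\n",
"    {\n",
"        \"model\": \"Open-Orca/Mistral-7B-OpenOrca\",\n",
"        \"model_client_cls\": \"CustomModelClient\",\n",
"        \"device\": \"cuda\",\n",
"        \"n\": 1,\n",
"        \"params\": {\"max_length\": 1000},\n",
"    }\n",
"]\n",
"```"
]
},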
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"config_list_custom = autogen.config_list_from_json(\n",
" \"OAI_CONFIG_LIST\",\n",
" filter_dict={\"model_client_cls\": [\"CustomModelClient\"]},\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct Agents\n",
"\n",
"Consturct a simple conversation between a User proxy and an Assistent agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assistant = AssistantAgent(\"assistant\", llm_config={\"config_list\": config_list_custom})\n",
"user_proxy = UserProxyAgent(\n",
" \"user_proxy\",\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the custom client class to the assistant agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assistant.register_model_client(model_client_cls=CustomModelClient)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"user_proxy.initiate_chat(assistant, message=\"Write python code to print Hello World!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register a custom client class with a pre-loaded model\n",
"\n",
"If you want to have more control over when the model gets loaded, you can load the model yourself and pass it as an argument to the CustomClient during registration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# custom client with custom model loader\n",
"\n",
"\n",
"class CustomModelClientWithArguments(CustomModelClient):\n",
" def __init__(self, config, loaded_model, tokenizer, **kwargs):\n",
" print(f\"CustomModelClientWithArguments config: {config}\")\n",
"\n",
" self.model_name = config[\"model\"]\n",
" self.model = loaded_model\n",
" self.tokenizer = tokenizer\n",
"\n",
" self.device = config.get(\"device\", \"cpu\")\n",
"\n",
" gen_config_params = config.get(\"params\", {})\n",
" self.max_length = gen_config_params.get(\"max_length\", 256)\n",
" print(f\"Loaded model {config['model']} to {self.device}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load model here\n",
"\n",
"\n",
"config = config_list_custom[0]\n",
"device = config.get(\"device\", \"cpu\")\n",
"loaded_model = AutoModelForCausalLM.from_pretrained(config[\"model\"]).to(device)\n",
"tokenizer = AutoTokenizer.from_pretrained(config[\"model\"], use_fast=False)\n",
"tokenizer.pad_token_id = tokenizer.eos_token_id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add the config of the new custom model\n",
"\n",
"```json\n",
"{\n",
" \"model\": \"Open-Orca/Mistral-7B-OpenOrca\",\n",
" \"model_client_cls\": \"CustomModelClientWithArguments\",\n",
" \"device\": \"cuda\",\n",
" \"n\": 1,\n",
" \"params\": {\n",
" \"max_length\": 1000,\n",
" }\n",
"},\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"config_list_custom = autogen.config_list_from_json(\n",
" \"OAI_CONFIG_LIST\",\n",
" filter_dict={\"model_client_cls\": [\"CustomModelClientWithArguments\"]},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assistant = AssistantAgent(\"assistant\", llm_config={\"config_list\": config_list_custom})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assistant.register_model_client(\n",
" model_client_cls=CustomModelClientWithArguments,\n",
" loaded_model=loaded_model,\n",
" tokenizer=tokenizer,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"user_proxy.initiate_chat(assistant, message=\"Write python code to print Hello World!\")"
]
}
],
"metadata": {
"front_matter": {
"description": "Define and laod a custom model",
"tags": [
"custom model"
]
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.5"
},
"vscode": {
"interpreter": {
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}