mirror of https://github.com/microsoft/autogen.git
Demo Notebook for Using Gemini with VertexAI (#3032)
* add notebook for using Gemini with VertexAI
* add missing image
* remove part with workload identity federation
* Spelling
* Capitalisation and tweak on config note.
* autogen gemini gcp image
* fix formatting
* move gemini vertexai notebook to website/docs/topics/non-openai-models
* Adjust license
* remove auto-generated cell

Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
This commit is contained in:
parent
82903f5f89
commit
f55a98f32b
|
@ -0,0 +1,796 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"slideshow": {
|
||||
"slide_type": "slide"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"# Use AutoGen with Gemini via VertexAI\n",
|
||||
"\n",
|
||||
"This notebook demonstrates how to use AutoGen with Gemini via Vertex AI, which enables enhanced authentication methods that also meet enterprise requirements, using either service accounts or a personal Google Cloud account.\n",
|
||||
"\n",
|
||||
"## Requirements\n",
|
||||
"\n",
|
||||
"AutoGen requires `Python>=3.8`. To run this notebook example, please install with the [gemini] option:\n",
|
||||
"```bash\n",
|
||||
"pip install \"pyautogen[gemini]\"\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"### Google Cloud Account\n",
|
||||
"To use VertexAI, a Google Cloud account is needed. If you do not have one yet, just sign up for a free trial [here](https://cloud.google.com).\n",
|
||||
"\n",
|
||||
"Log in to your account at [console.cloud.google.com](https://console.cloud.google.com).\n",
|
||||
"\n",
|
||||
"In the next step we create a Google Cloud project, which is needed for VertexAI. The official guide for creating a project is available [here](https://developers.google.com/workspace/guides/create-project).\n",
|
||||
"\n",
|
||||
"We will name our project Autogen-with-Gemini.\n",
|
||||
"\n",
|
||||
"### Enable Google Cloud APIs\n",
|
||||
"\n",
|
||||
"If you wish to use Gemini with your personal account, then creating a Google Cloud account is enough. However, if a service account is needed, then a few extra steps are needed.\n",
|
||||
"\n",
|
||||
"#### Enable API for Gemini\n",
|
||||
" * To enable Gemini for Google Cloud, search for \"api\" and select Enabled APIs & services. \n",
|
||||
" * Then click ENABLE APIS AND SERVICES. \n",
|
||||
" * Search for Gemini, and select Gemini for Google Cloud. <br/> A direct link will look like this for our autogen-with-gemini project:\n",
|
||||
"https://console.cloud.google.com/apis/library/cloudaicompanion.googleapis.com?project=autogen-with-gemini&supportedpurview=project\n",
|
||||
"* Click ENABLE for Gemini for Google Cloud.\n",
|
||||
"\n",
|
||||
"#### Enable API for Vertex AI\n",
|
||||
"* To enable Vertex AI for Google Cloud, search for \"api\" and select Enabled APIs & services. \n",
|
||||
"* Then click ENABLE APIS AND SERVICES. \n",
|
||||
"* Search for Vertex AI, and select Vertex AI API. <br/> A direct link for our autogen-with-gemini project will be: https://console.cloud.google.com/apis/library/aiplatform.googleapis.com?project=autogen-with-gemini\n",
|
||||
"* Click ENABLE for the Vertex AI API.\n",
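"\n",
"Alternatively, if the `gcloud` CLI is installed and configured, the same APIs can be enabled from the command line. A minimal sketch, assuming the project is named autogen-with-gemini as above:\n",
"```bash\n",
"# Enable Gemini for Google Cloud and the Vertex AI API for the project\n",
"gcloud services enable cloudaicompanion.googleapis.com --project autogen-with-gemini\n",
"gcloud services enable aiplatform.googleapis.com --project autogen-with-gemini\n",
"```\n",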
|
||||
"\n",
|
||||
"### Create a Service Account\n",
|
||||
"\n",
|
||||
"An overview of service accounts [can be found in the cloud console](https://console.cloud.google.com/iam-admin/serviceaccounts).\n",
|
||||
"\n",
|
||||
"Detailed guide: https://cloud.google.com/iam/docs/service-accounts-create\n",
|
||||
"\n",
|
||||
"A service account can be created within the scope of a project, so a project needs to be selected.\n",
|
||||
"\n",
|
||||
"<div>\n",
|
||||
"<img src=\"https://github.com/microsoft/autogen/blob/main/website/static/img/create_gcp_svc.png?raw=true\" width=\"1000\" />\n",
|
||||
"</div>\n",
|
||||
"\n",
|
||||
"For the sake of simplicity, we will assign the Editor role to our service account for autogen on our Autogen-with-Gemini Google Cloud project.\n",
|
||||
"\n",
|
||||
"* Under IAM & Admin > Service Accounts, select the newly created service account and click the \"Manage keys\" option. \n",
|
||||
"* From the \"ADD KEY\" dropdown select \"Create new key\" and select the JSON format and click CREATE.\n",
|
||||
" * The new key will be downloaded automatically. \n",
|
||||
"* You can then upload the service account key file to the machine from which you will be running AutoGen. \n",
|
||||
" * Please consider restricting the permissions on the key file. For example, you could run `chmod 600 autogen-with-gemini-service-account-key.json` if your keyfile is called autogen-with-gemini-service-account-key.json."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"execution": {
|
||||
"iopub.execute_input": "2023-02-13T23:40:52.317406Z",
|
||||
"iopub.status.busy": "2023-02-13T23:40:52.316561Z",
|
||||
"iopub.status.idle": "2023-02-13T23:40:52.321193Z",
|
||||
"shell.execute_reply": "2023-02-13T23:40:52.320628Z"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# %pip install \"pyautogen[gemini]\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {
|
||||
"execution": {
|
||||
"iopub.execute_input": "2023-02-13T23:40:54.634335Z",
|
||||
"iopub.status.busy": "2023-02-13T23:40:54.633929Z",
|
||||
"iopub.status.idle": "2023-02-13T23:40:56.105700Z",
|
||||
"shell.execute_reply": "2023-02-13T23:40:56.105085Z"
|
||||
},
|
||||
"slideshow": {
|
||||
"slide_type": "slide"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import autogen"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Configure Authentication\n",
|
||||
"\n",
|
||||
"Authentication happens using standard [Google Cloud authentication methods](https://cloud.google.com/docs/authentication), <br/> which means\n",
|
||||
"that either an already active session can be reused, or the Google application credentials of a service account can be specified.\n",
|
||||
"\n",
|
||||
"#### Use Service Account Keyfile\n",
|
||||
"\n",
|
||||
"The Google Cloud service account can be specified by setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path to the JSON key file of the service account. <br/>\n",
|
||||
"\n",
|
||||
"We can either set the environment variable directly, or add the `\"google_application_credentials\"` key with the respective value for our model in the OAI_CONFIG_LIST.\n",
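"\n",
"For example, the environment variable can be set in Python before the Gemini client is created (a minimal sketch, reusing the key file name from above):\n",
"```python\n",
"import os\n",
"\n",
"# Point the Google client libraries at the service account key file\n",
"os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"autogen-with-gemini-service-account-key.json\"\n",
"```\n",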
|
||||
"\n",
|
||||
"#### Use the Google Default Credentials\n",
|
||||
"\n",
|
||||
"If you are using [Cloud Shell](https://shell.cloud.google.com/cloudshell) or [Cloud Shell editor](https://shell.cloud.google.com/cloudshell/editor) in Google Cloud, <br/> then you are already authenticated. If you have the Google Cloud SDK installed locally, <br/> then you can login by running `gcloud auth login` in the command line. \n",
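"\n",
"A typical local login flow might look like this (a sketch; the project name is the one created above):\n",
"```bash\n",
"# Authenticate with your Google account and select the project\n",
"gcloud auth login\n",
"gcloud config set project autogen-with-gemini\n",
"```\n",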
|
||||
"\n",
|
||||
"Detailed instructions for installing the Google Cloud SDK can be found [here](https://cloud.google.com/sdk/docs/install)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example Config List\n",
|
||||
"The config could look like the following (change `project_id` and `google_application_credentials`):\n",
|
||||
"```python\n",
|
||||
"config_list = [\n",
|
||||
" {\n",
|
||||
" \"model\": \"gemini-pro\",\n",
|
||||
" \"api_type\": \"google\",\n",
|
||||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||||
" \"location\": \"us-west1\"\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"model\": \"gemini-1.5-pro-001\",\n",
|
||||
" \"api_type\": \"google\",\n",
|
||||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||||
" \"location\": \"us-west1\"\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"model\": \"gemini-1.5-pro\",\n",
|
||||
" \"api_type\": \"google\",\n",
|
||||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||||
" \"location\": \"us-west1\",\n",
|
||||
" \"google_application_credentials\": \"autogen-with-gemini-service-account-key.json\"\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"model\": \"gemini-pro-vision\",\n",
|
||||
" \"api_type\": \"google\",\n",
|
||||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||||
" \"location\": \"us-west1\"\n",
|
||||
" }\n",
|
||||
"]\n",
|
||||
"```\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"## Configure Safety Settings for VertexAI\n",
|
||||
"Configuring safety settings for VertexAI is slightly different, as we have to use the specialized safety setting object types instead of plain strings."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from vertexai.generative_models import (\n",
|
||||
" GenerationConfig,\n",
|
||||
" GenerativeModel,\n",
|
||||
" HarmBlockThreshold,\n",
|
||||
" HarmCategory,\n",
|
||||
" Part,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"safety_settings = {\n",
|
||||
" HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||||
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||||
" HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||||
" HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from autogen import AssistantAgent, UserProxyAgent\n",
"from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent\n",
"from autogen.code_utils import content_str"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"config_list_gemini = autogen.config_list_from_json(\n",
|
||||
" \"OAI_CONFIG_LIST\",\n",
|
||||
" filter_dict={\n",
|
||||
" \"model\": [\"gemini-1.5-pro\"],\n",
|
||||
" },\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"config_list_gemini_vision = autogen.config_list_from_json(\n",
|
||||
" \"OAI_CONFIG_LIST\",\n",
|
||||
" filter_dict={\n",
|
||||
" \"model\": [\"gemini-pro-vision\"],\n",
|
||||
" },\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"for config_list in [config_list_gemini, config_list_gemini_vision]:\n",
|
||||
" for config_list_item in config_list:\n",
|
||||
" config_list_item[\"safety_settings\"] = safety_settings\n",
|
||||
"\n",
|
||||
"seed = 25 # for caching"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[33muser_proxy\u001b[0m (to assistant):\n",
|
||||
"\n",
|
||||
"\n",
|
||||
" Compute the integral of the function f(x)=x^2 on the interval 0 to 1 using a Python script, \n",
|
||||
" which returns the value of the definite integral.\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[33massistant\u001b[0m (to user_proxy):\n",
|
||||
"\n",
|
||||
"Plan:\n",
|
||||
"1. (Code) Use Python's numerical integration library to compute the integral.\n",
|
||||
"2. (Language) Output the result.\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"# filename: integral.py\n",
|
||||
"import scipy.integrate\n",
|
||||
"\n",
|
||||
"f = lambda x: x**2\n",
|
||||
"result, error = scipy.integrate.quad(f, 0, 1)\n",
|
||||
"\n",
|
||||
"print(f\"The definite integral of x^2 from 0 to 1 is: {result}\")\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Let me know when you have executed the code. \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[31m\n",
|
||||
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
|
||||
"\u001b[33muser_proxy\u001b[0m (to assistant):\n",
|
||||
"\n",
|
||||
"exitcode: 0 (execution succeeded)\n",
|
||||
"Code output: \n",
|
||||
"The definite integral of x^2 from 0 to 1 is: 0.33333333333333337\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33massistant\u001b[0m (to user_proxy):\n",
|
||||
"\n",
|
||||
"The code executed successfully and returned the value of the definite integral as approximately 0.33333333333333337. \n",
|
||||
"\n",
|
||||
"This aligns with the analytical solution:\n",
|
||||
"\n",
|
||||
"The integral of x^2 is (x^3)/3. Evaluating this from 0 to 1 gives us (1^3)/3 - (0^3)/3 = 1/3 = 0.33333...\n",
|
||||
"\n",
|
||||
"Therefore, the answer is verified to be correct.\n",
|
||||
"\n",
|
||||
"TERMINATE\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"assistant = AssistantAgent(\n",
|
||||
" \"assistant\", llm_config={\"config_list\": config_list_gemini, \"seed\": seed}, max_consecutive_auto_reply=3\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"user_proxy = UserProxyAgent(\n",
|
||||
" \"user_proxy\",\n",
|
||||
" code_execution_config={\"work_dir\": \"coding\", \"use_docker\": False},\n",
|
||||
" human_input_mode=\"NEVER\",\n",
|
||||
" is_termination_msg=lambda x: content_str(x.get(\"content\")).find(\"TERMINATE\") >= 0,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"result = user_proxy.initiate_chat(\n",
|
||||
" assistant,\n",
|
||||
" message=\"\"\"\n",
|
||||
" Compute the integral of the function f(x)=x^2 on the interval 0 to 1 using a Python script,\n",
|
||||
" which returns the value of the definite integral.\"\"\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example with Gemini Multimodal\n",
|
||||
"Authentication is the same for vision models as for the text-based Gemini models."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[33muser_proxy\u001b[0m (to Gemini Vision):\n",
|
||||
"\n",
|
||||
"Describe what is in this image?\n",
|
||||
"<image>.\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[31m\n",
|
||||
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
|
||||
"\u001b[33mGemini Vision\u001b[0m (to user_proxy):\n",
|
||||
"\n",
|
||||
" The image shows a taxonomy of different types of conversational agents. The taxonomy is based on two dimensions: agent customization and flexible conversation patterns. Agent customization refers to the ability of the agent to be tailored to the individual user. Flexible conversation patterns refer to the ability of the agent to engage in different types of conversations, such as joint chat and hierarchical chat.\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"ChatResult(chat_id=None, chat_history=[{'content': 'Describe what is in this image?\\n<img https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png?raw=true>.', 'role': 'assistant'}, {'content': ' The image shows a taxonomy of different types of conversational agents. The taxonomy is based on two dimensions: agent customization and flexible conversation patterns. Agent customization refers to the ability of the agent to be tailored to the individual user. Flexible conversation patterns refer to the ability of the agent to engage in different types of conversations, such as joint chat and hierarchical chat.', 'role': 'user'}], summary=' The image shows a taxonomy of different types of conversational agents. The taxonomy is based on two dimensions: agent customization and flexible conversation patterns. Agent customization refers to the ability of the agent to be tailored to the individual user. Flexible conversation patterns refer to the ability of the agent to engage in different types of conversations, such as joint chat and hierarchical chat.', cost={'usage_including_cached_inference': {'total_cost': 0.0002385, 'gemini-pro-vision': {'cost': 0.0002385, 'prompt_tokens': 267, 'completion_tokens': 70, 'total_tokens': 337}}, 'usage_excluding_cached_inference': {'total_cost': 0.0002385, 'gemini-pro-vision': {'cost': 0.0002385, 'prompt_tokens': 267, 'completion_tokens': 70, 'total_tokens': 337}}}, human_input=[])"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"image_agent = MultimodalConversableAgent(\n",
|
||||
" \"Gemini Vision\", llm_config={\"config_list\": config_list_gemini_vision, \"seed\": seed}, max_consecutive_auto_reply=1\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"user_proxy = UserProxyAgent(\"user_proxy\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0)\n",
|
||||
"\n",
|
||||
"user_proxy.initiate_chat(\n",
|
||||
" image_agent,\n",
|
||||
" message=\"\"\"Describe what is in this image?\n",
|
||||
"<img https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png?raw=true>.\"\"\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"front_matter": {
|
||||
"description": "Using Gemini with AutoGen via VertexAI",
|
||||
"tags": [
|
||||
"gemini",
|
||||
"vertexai"
|
||||
]
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
|
||||
}
|
||||
},
|
||||
"widgets": {
|
||||
"application/vnd.jupyter.widget-state+json": {
|
||||
"state": {
|
||||
"2d910cfd2d2a4fc49fc30fbbdc5576a7": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border_bottom": null,
|
||||
"border_left": null,
|
||||
"border_right": null,
|
||||
"border_top": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
},
|
||||
"454146d0f7224f038689031002906e6f": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "HBoxModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "HBoxModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "HBoxView",
|
||||
"box_style": "",
|
||||
"children": [
|
||||
"IPY_MODEL_e4ae2b6f5a974fd4bafb6abb9d12ff26",
|
||||
"IPY_MODEL_577e1e3cc4db4942b0883577b3b52755",
|
||||
"IPY_MODEL_b40bdfb1ac1d4cffb7cefcb870c64d45"
|
||||
],
|
||||
"layout": "IPY_MODEL_dc83c7bff2f241309537a8119dfc7555",
|
||||
"tabbable": null,
|
||||
"tooltip": null
|
||||
}
|
||||
},
|
||||
"577e1e3cc4db4942b0883577b3b52755": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "FloatProgressModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "FloatProgressModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "ProgressView",
|
||||
"bar_style": "success",
|
||||
"description": "",
|
||||
"description_allow_html": false,
|
||||
"layout": "IPY_MODEL_2d910cfd2d2a4fc49fc30fbbdc5576a7",
|
||||
"max": 1,
|
||||
"min": 0,
|
||||
"orientation": "horizontal",
|
||||
"style": "IPY_MODEL_74a6ba0c3cbc4051be0a83e152fe1e62",
|
||||
"tabbable": null,
|
||||
"tooltip": null,
|
||||
"value": 1
|
||||
}
|
||||
},
|
||||
"6086462a12d54bafa59d3c4566f06cb2": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border_bottom": null,
|
||||
"border_left": null,
|
||||
"border_right": null,
|
||||
"border_top": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
},
|
||||
"74a6ba0c3cbc4051be0a83e152fe1e62": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "ProgressStyleModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "ProgressStyleModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "StyleView",
|
||||
"bar_color": null,
|
||||
"description_width": ""
|
||||
}
|
||||
},
|
||||
"7d3f3d9e15894d05a4d188ff4f466554": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "HTMLStyleModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "HTMLStyleModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "StyleView",
|
||||
"background": null,
|
||||
"description_width": "",
|
||||
"font_size": null,
|
||||
"text_color": null
|
||||
}
|
||||
},
|
||||
"b40bdfb1ac1d4cffb7cefcb870c64d45": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "HTMLModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "HTMLModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "HTMLView",
|
||||
"description": "",
|
||||
"description_allow_html": false,
|
||||
"layout": "IPY_MODEL_f1355871cc6f4dd4b50d9df5af20e5c8",
|
||||
"placeholder": "",
|
||||
"style": "IPY_MODEL_ca245376fd9f4354af6b2befe4af4466",
|
||||
"tabbable": null,
|
||||
"tooltip": null,
|
||||
"value": " 1/1 [00:00<00:00, 44.69it/s]"
|
||||
}
|
||||
},
|
||||
"ca245376fd9f4354af6b2befe4af4466": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "HTMLStyleModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "HTMLStyleModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "StyleView",
|
||||
"background": null,
|
||||
"description_width": "",
|
||||
"font_size": null,
|
||||
"text_color": null
|
||||
}
|
||||
},
|
||||
"dc83c7bff2f241309537a8119dfc7555": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border_bottom": null,
|
||||
"border_left": null,
|
||||
"border_right": null,
|
||||
"border_top": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
},
|
||||
"e4ae2b6f5a974fd4bafb6abb9d12ff26": {
|
||||
"model_module": "@jupyter-widgets/controls",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "HTMLModel",
|
||||
"state": {
|
||||
"_dom_classes": [],
|
||||
"_model_module": "@jupyter-widgets/controls",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "HTMLModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/controls",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "HTMLView",
|
||||
"description": "",
|
||||
"description_allow_html": false,
|
||||
"layout": "IPY_MODEL_6086462a12d54bafa59d3c4566f06cb2",
|
||||
"placeholder": "",
|
||||
"style": "IPY_MODEL_7d3f3d9e15894d05a4d188ff4f466554",
|
||||
"tabbable": null,
|
||||
"tooltip": null,
|
||||
"value": "100%"
|
||||
}
|
||||
},
|
||||
"f1355871cc6f4dd4b50d9df5af20e5c8": {
|
||||
"model_module": "@jupyter-widgets/base",
|
||||
"model_module_version": "2.0.0",
|
||||
"model_name": "LayoutModel",
|
||||
"state": {
|
||||
"_model_module": "@jupyter-widgets/base",
|
||||
"_model_module_version": "2.0.0",
|
||||
"_model_name": "LayoutModel",
|
||||
"_view_count": null,
|
||||
"_view_module": "@jupyter-widgets/base",
|
||||
"_view_module_version": "2.0.0",
|
||||
"_view_name": "LayoutView",
|
||||
"align_content": null,
|
||||
"align_items": null,
|
||||
"align_self": null,
|
||||
"border_bottom": null,
|
||||
"border_left": null,
|
||||
"border_right": null,
|
||||
"border_top": null,
|
||||
"bottom": null,
|
||||
"display": null,
|
||||
"flex": null,
|
||||
"flex_flow": null,
|
||||
"grid_area": null,
|
||||
"grid_auto_columns": null,
|
||||
"grid_auto_flow": null,
|
||||
"grid_auto_rows": null,
|
||||
"grid_column": null,
|
||||
"grid_gap": null,
|
||||
"grid_row": null,
|
||||
"grid_template_areas": null,
|
||||
"grid_template_columns": null,
|
||||
"grid_template_rows": null,
|
||||
"height": null,
|
||||
"justify_content": null,
|
||||
"justify_items": null,
|
||||
"left": null,
|
||||
"margin": null,
|
||||
"max_height": null,
|
||||
"max_width": null,
|
||||
"min_height": null,
|
||||
"min_width": null,
|
||||
"object_fit": null,
|
||||
"object_position": null,
|
||||
"order": null,
|
||||
"overflow": null,
|
||||
"padding": null,
|
||||
"right": null,
|
||||
"top": null,
|
||||
"visibility": null,
|
||||
"width": null
|
||||
}
|
||||
}
|
||||
},
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
|
@ -0,0 +1,3 @@
|
|||
version https://git-lfs.github.com/spec/v1
|
||||
oid sha256:ff7aa39b25ffcaba97cfd1a87cb1b44d45d3814241c83122f84408e03ad575b0
|
||||
size 101583
|