Mirror of https://github.com/microsoft/autogen.git

Compare commits: 11 commits (8d48a441fb ... 3638fb6b9c)

Author | SHA1 | Date |
---|---|---|
Faisal Ilaiwi | 3638fb6b9c | |
Luke Hsiao | 5ad267707d | |
Eric Zhu | 610388945b | |
Rajan | 1c5baf020f | |
Wael Karkoub | 8a2a40d5d2 | |
Wael Karkoub | c345d41446 | |
Ryan Sweet | 1cdb871849 | |
Daniel Chalef | 76a4bd05d9 | |
Lokesh Goel | 1960eaba1a | |
kiyoung | 02977ee250 | |
Faisal Ilaiwi | 2a1e324c71 | |
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright 2014 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -102,6 +102,9 @@ class MessageHistoryLimiter:
            if remaining_count == 0:
                break

+        if not transforms_util.is_tool_call_valid(truncated_messages):
+            truncated_messages.pop()
+
        return truncated_messages

    def get_logs(self, pre_transform_messages: List[Dict], post_transform_messages: List[Dict]) -> Tuple[str, bool]:
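The guard added here prevents a truncated history from beginning with an orphaned tool result: a `role: "tool"` message whose originating tool call was cut off by the limit. Below is a standalone sketch of that invariant using plain dicts; autogen's actual transform differs in its internal bookkeeping, so this is an illustration of the rule, not the library's code.

```python
from typing import Dict, List


def is_tool_call_valid(messages: List[Dict]) -> bool:
    # Mirrors transforms_util.is_tool_call_valid from the diff: a history
    # must not start with a tool result whose tool call was truncated away.
    return messages[0].get("role") != "tool"


def keep_last(messages: List[Dict], max_messages: int) -> List[Dict]:
    truncated = messages[-max_messages:]
    # The guard: drop an orphaned leading tool result.
    if truncated and not is_tool_call_valid(truncated):
        truncated.pop(0)
    return truncated


history = [
    {"role": "user", "content": "what is 2+2?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1"}]},
    {"role": "tool", "content": "4"},
    {"role": "user", "content": "thanks"},
]
trimmed = keep_last(history, 2)  # the "tool" message loses its call, so it is dropped
```

Without the guard, an OpenAI-style API would reject the truncated history, since a tool message must follow the assistant message that issued the call.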
@@ -229,6 +232,9 @@ class MessageTokenLimiter:
            processed_messages_tokens += msg_tokens
            processed_messages.insert(0, msg)

+        if not transforms_util.is_tool_call_valid(processed_messages):
+            processed_messages.pop()
+
        return processed_messages

    def get_logs(self, pre_transform_messages: List[Dict], post_transform_messages: List[Dict]) -> Tuple[str, bool]:
@@ -319,7 +325,7 @@ class TextMessageCompressor:
        text_compressor: Optional[TextCompressor] = None,
        min_tokens: Optional[int] = None,
        compression_params: Dict = dict(),
-        cache: Optional[AbstractCache] = Cache.disk(),
+        cache: Optional[AbstractCache] = None,
        filter_dict: Optional[Dict] = None,
        exclude_filter: bool = True,
    ):
@@ -391,6 +397,7 @@ class TextMessageCompressor:

        cache_key = transforms_util.cache_key(message["content"], self._min_tokens)
        cached_content = transforms_util.cache_content_get(self._cache, cache_key)

        if cached_content is not None:
            message["content"], savings = cached_content
        else:
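The hunks above change the compressor's default cache from `Cache.disk()` to `None`, so compression results are memoized only when a cache is passed explicitly. Here is a minimal sketch of the key/lookup pattern, with a plain dict standing in for autogen's `AbstractCache` and a trivial placeholder in place of the real text compressor; the function names mirror the diff but are stand-ins, not the library's API.

```python
import hashlib
from typing import Dict, Optional, Tuple


def cache_key(content: str, min_tokens: Optional[int]) -> str:
    # Hypothetical stand-in for transforms_util.cache_key: a stable hash
    # of the content plus the compression parameter.
    return hashlib.md5(f"{content}:{min_tokens}".encode()).hexdigest()


def compress_with_cache(
    content: str,
    min_tokens: Optional[int] = None,
    cache: Optional[Dict[str, Tuple[str, int]]] = None,
) -> Tuple[str, int]:
    # cache defaults to None, mirroring the new TextMessageCompressor
    # default: nothing is memoized unless a cache object is supplied.
    key = cache_key(content, min_tokens)
    if cache is not None and key in cache:
        return cache[key]
    compressed = content[: len(content) // 2]  # placeholder "compression"
    savings = len(content) - len(compressed)
    if cache is not None:
        cache[key] = (compressed, savings)
    return compressed, savings


store: Dict[str, Tuple[str, int]] = {}
first = compress_with_cache("a" * 20, cache=store)
second = compress_with_cache("a" * 20, cache=store)  # served from the cache
```

Dropping the disk-cache default avoids creating cache files as a side effect of merely constructing the transform; callers who want persistence can still pass `Cache.disk()` themselves.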
@@ -112,3 +112,7 @@ def should_transform_message(message: Dict[str, Any], filter_dict: Optional[Dict
        return True

    return len(filter_config([message], filter_dict, exclude)) > 0
+
+
+def is_tool_call_valid(messages: List[Dict[str, Any]]) -> bool:
+    return messages[0].get("role") != "tool"
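`should_transform_message` reuses `filter_config` on a single-element list: the message is transformed only if it survives the filter. A simplified sketch of that pattern follows; this `filter_config` is an invented stand-in (the real helper lives elsewhere in autogen and has richer matching rules), kept only detailed enough to show why `len(...) > 0` works as a per-message predicate.

```python
from typing import Any, Dict, List, Optional


def filter_config(
    items: List[Dict[str, Any]], filter_dict: Optional[Dict[str, Any]], exclude: bool
) -> List[Dict[str, Any]]:
    # Simplified stand-in: keep items whose fields match every key in
    # filter_dict, or the complement of that set when exclude=True.
    if not filter_dict:
        return items

    def matches(item: Dict[str, Any]) -> bool:
        return all(item.get(key) in values for key, values in filter_dict.items())

    return [item for item in items if matches(item) != exclude]


def should_transform_message(
    message: Dict[str, Any], filter_dict: Optional[Dict[str, Any]], exclude: bool
) -> bool:
    if not filter_dict:
        return True
    return len(filter_config([message], filter_dict, exclude)) > 0
```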
@@ -56,16 +56,7 @@ class CouchbaseVectorDB(VectorDB):
            wait_until_index_ready (float | None): Blocking call to wait until the database indexes are ready. None means no wait. Default is None.
            wait_until_document_ready (float | None): Blocking call to wait until the database documents are ready. None means no wait. Default is None.
        """
-        print(
-            "CouchbaseVectorDB",
-            connection_string,
-            username,
-            password,
-            bucket_name,
-            scope_name,
-            collection_name,
-            index_name,
-        )

        self.embedding_function = embedding_function
        self.index_name = index_name
@@ -119,6 +110,7 @@ class CouchbaseVectorDB(VectorDB):
        try:
            collection_mgr = self.bucket.collections()
            collection_mgr.create_collection(self.scope.name, collection_name)
+            self.cluster.query(f"CREATE PRIMARY INDEX ON {self.bucket.name}.{self.scope.name}.{collection_name}")

        except Exception:
            if not get_or_create:
@@ -287,7 +279,12 @@ class CouchbaseVectorDB(VectorDB):
                [doc["content"]]
            ).tolist()  # Gets new embedding even in case of document update

-            doc_content = {TEXT_KEY: doc["content"], "metadata": doc.get("metadata", {}), EMBEDDING_KEY: embedding}
+            doc_content = {
+                TEXT_KEY: doc["content"],
+                "metadata": doc.get("metadata", {}),
+                EMBEDDING_KEY: embedding,
+                "id": doc_id,
+            }
            docs_to_upsert[doc_id] = doc_content
        collection.upsert_multi(docs_to_upsert)
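The change above stores the document id inside the document body, alongside the text, metadata, and embedding, so the id round-trips in query results. A self-contained sketch of the payload shape follows; the concrete values of `TEXT_KEY` and `EMBEDDING_KEY` and the `embed` callable are assumptions for illustration, and no Couchbase client is involved.

```python
from typing import Any, Callable, Dict, List

# Assumed values for the module-level constants referenced in the diff.
TEXT_KEY = "content"
EMBEDDING_KEY = "embedding"


def build_upsert_batch(
    docs: List[Dict[str, Any]], embed: Callable[[str], List[float]]
) -> Dict[str, Dict[str, Any]]:
    """Build the keyed batch passed to collection.upsert_multi()."""
    docs_to_upsert: Dict[str, Dict[str, Any]] = {}
    for doc in docs:
        doc_id = doc["id"]
        embedding = embed(doc["content"])  # recomputed even on update
        docs_to_upsert[doc_id] = {
            TEXT_KEY: doc["content"],
            "metadata": doc.get("metadata", {}),
            EMBEDDING_KEY: embedding,
            "id": doc_id,  # the field this change adds to the body
        }
    return docs_to_upsert


batch = build_upsert_batch([{"id": "doc1", "content": "hello"}], lambda text: [0.1, 0.2])
```

Keeping the id in the body means a vector search that returns only document contents still carries enough information to identify each hit.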
@@ -79,6 +79,7 @@ class ConversableAgent(LLMAgent):
        description: Optional[str] = None,
        chat_messages: Optional[Dict[Agent, List[Dict]]] = None,
        silent: Optional[bool] = None,
+        role_for_system_message: Literal["system", "user"] = "system",
    ):
        """
        Args:
@@ -143,7 +144,7 @@ class ConversableAgent(LLMAgent):
        else:
            self._oai_messages = chat_messages

-        self._oai_system_message = [{"content": system_message, "role": "system"}]
+        self._oai_system_message = [{"content": system_message, "role": role_for_system_message}]
        self._description = description if description is not None else system_message
        self._is_termination_msg = (
            is_termination_msg
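Together, these two hunks let callers send the system prompt under the `"user"` role instead of the hard-coded `"system"` role, which matters for model endpoints that reject system messages. A minimal sketch of the changed construction, outside of autogen's class:

```python
from typing import Dict, List, Literal


def build_system_message(
    system_message: str,
    role_for_system_message: Literal["system", "user"] = "system",
) -> List[Dict[str, str]]:
    # Mirrors the changed line: the role is no longer hard-coded to "system".
    return [{"content": system_message, "role": role_for_system_message}]


default_msg = build_system_message("You are helpful.")
user_role_msg = build_system_message("You are helpful.", "user")
```

The default preserves the old behavior, so existing agents are unaffected; only callers that opt into `role_for_system_message="user"` see a difference.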
@@ -0,0 +1,5 @@
from .pod_commandline_code_executor import PodCommandLineCodeExecutor

__all__ = [
    "PodCommandLineCodeExecutor",
]
@@ -0,0 +1,323 @@
from __future__ import annotations

import atexit
import importlib
import sys
import textwrap
import uuid
from hashlib import md5
from pathlib import Path
from time import sleep
from types import TracebackType
from typing import Any, ClassVar, Dict, List, Optional, Type, Union

client = importlib.import_module("kubernetes.client")
config = importlib.import_module("kubernetes.config")
ApiException = importlib.import_module("kubernetes.client.rest").ApiException
stream = importlib.import_module("kubernetes.stream").stream

from ...code_utils import TIMEOUT_MSG, _cmd
from ..base import CodeBlock, CodeExecutor, CodeExtractor, CommandLineCodeResult
from ..markdown_code_extractor import MarkdownCodeExtractor
from ..utils import _get_file_name_from_content, silence_pip

if sys.version_info >= (3, 11):
    from typing import Self
else:
    from typing_extensions import Self


class PodCommandLineCodeExecutor(CodeExecutor):
    DEFAULT_EXECUTION_POLICY: ClassVar[Dict[str, bool]] = {
        "bash": True,
        "shell": True,
        "sh": True,
        "pwsh": False,
        "powershell": False,
        "ps1": False,
        "python": True,
        "javascript": False,
        "html": False,
        "css": False,
    }
    LANGUAGE_ALIASES: ClassVar[Dict[str, str]] = {
        "py": "python",
        "js": "javascript",
    }
    LANGUAGE_FILE_EXTENSION: ClassVar[Dict[str, str]] = {
        "python": "py",
        "javascript": "js",
        "bash": "sh",
        "shell": "sh",
        "sh": "sh",
    }

    def __init__(
        self,
        image: str = "python:3-slim",
        pod_name: Optional[str] = None,
        namespace: Optional[str] = None,
        pod_spec: Optional[client.V1Pod] = None,  # type: ignore
        container_name: Optional[str] = "autogen-code-exec",
        timeout: int = 60,
        work_dir: Union[Path, str] = Path("/workspace"),
        kube_config_file: Optional[str] = None,
        stop_container: bool = True,
        execution_policies: Optional[Dict[str, bool]] = None,
    ):
        """(Experimental) A code executor class that executes code through
        a command line environment in a Kubernetes pod.

        The executor first saves each code block in a file in the working
        directory, and then executes the code file in the container.
        The executor executes the code blocks in the order they are received.
        Currently, the executor only supports Python and shell scripts.
        For Python code, use the language "python" for the code block.
        For shell scripts, use the language "bash", "shell", or "sh" for the code
        block.

        Args:
            image (str, optional): Docker image to use for code execution.
                Defaults to "python:3-slim".
            pod_name (Optional[str], optional): Name of the Kubernetes pod
                which is created. If None, a name is autogenerated. Defaults to None.
            namespace (Optional[str], optional): Namespace of the Kubernetes pod
                which is created. If None, the current namespace of this instance is used.
            pod_spec (Optional[client.V1Pod], optional): Specification of the Kubernetes pod.
                A custom pod spec can be provided with this parameter.
                If pod_spec is provided, the parameters above (image, pod_name, namespace) are ignored.
            container_name (Optional[str], optional): Name of the container where the code blocks will be
                executed. If the pod_spec parameter is provided, container_name must be provided as well.
            timeout (int, optional): The timeout for code execution. Defaults to 60.
            work_dir (Union[Path, str], optional): The working directory for the code
                execution. Defaults to Path("/workspace").
            kube_config_file (Optional[str], optional): Kubernetes configuration file path.
                If None, the KUBECONFIG environment variable or the service account token (in-cluster config) is used.
            stop_container (bool, optional): If true, will automatically stop the
                container when stop is called, when the context manager exits or when
                the Python process exits with atexit. Defaults to True.
            execution_policies (dict[str, bool], optional): Defines the supported execution languages.

        Raises:
            ValueError: On argument error, or if the container fails to start.
        """
        if kube_config_file is None:
            config.load_config()
        else:
            config.load_config(config_file=kube_config_file)

        self._api_client = client.CoreV1Api()

        if timeout < 1:
            raise ValueError("Timeout must be greater than or equal to 1.")
        self._timeout = timeout

        if isinstance(work_dir, str):
            work_dir = Path(work_dir)
        self._work_dir: Path = work_dir

        if container_name is None:
            container_name = "autogen-code-exec"
        self._container_name = container_name

        # Start a container from the image, ready to exec commands later
        if pod_spec:
            pod = pod_spec
        else:
            if pod_name is None:
                pod_name = f"autogen-code-exec-{uuid.uuid4()}"
            if namespace is None:
                namespace_path = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
                if not Path(namespace_path).is_file():
                    raise ValueError("Namespace where the pod will be launched must be provided")
                with open(namespace_path, "r") as f:
                    namespace = f.read()

            pod = client.V1Pod(
                metadata=client.V1ObjectMeta(name=pod_name, namespace=namespace),
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            args=["-c", "while true;do sleep 5; done"],
                            command=["/bin/sh"],
                            name=container_name,
                            image=image,
                        )
                    ],
                ),
            )

        try:
            pod_name = pod.metadata.name
            namespace = pod.metadata.namespace
            self._pod = self._api_client.create_namespaced_pod(namespace=namespace, body=pod)
        except ApiException as e:
            raise ValueError(f"Creating pod failed: {e}")

        self._wait_for_ready()

        def cleanup() -> None:
            try:
                self._api_client.delete_namespaced_pod(pod_name, namespace)
            except ApiException:
                pass
            atexit.unregister(cleanup)

        self._cleanup = cleanup

        if stop_container:
            atexit.register(cleanup)

        self.execution_policies = self.DEFAULT_EXECUTION_POLICY.copy()
        if execution_policies is not None:
            self.execution_policies.update(execution_policies)

    def _wait_for_ready(self, stop_time: float = 0.1) -> None:
        elapsed_time = 0.0
        name = self._pod.metadata.name
        namespace = self._pod.metadata.namespace
        while True:
            sleep(stop_time)
            elapsed_time += stop_time
            if elapsed_time > self._timeout:
                raise ValueError(
                    f"pod name {name} on namespace {namespace} is not Ready after timeout {self._timeout} seconds"
                )
            try:
                pod_status = self._api_client.read_namespaced_pod_status(name, namespace)
                if pod_status.status.phase == "Running":
                    break
            except ApiException as e:
                raise ValueError(f"reading pod status failed: {e}")

    @property
    def timeout(self) -> int:
        """(Experimental) The timeout for code execution."""
        return self._timeout

    @property
    def work_dir(self) -> Path:
        """(Experimental) The working directory for the code execution."""
        return self._work_dir

    @property
    def code_extractor(self) -> CodeExtractor:
        """(Experimental) Export a code extractor that can be used by an agent."""
        return MarkdownCodeExtractor()

    def execute_code_blocks(self, code_blocks: List[CodeBlock]) -> CommandLineCodeResult:
        """(Experimental) Execute the code blocks and return the result.

        Args:
            code_blocks (List[CodeBlock]): The code blocks to execute.

        Returns:
            CommandLineCodeResult: The result of the code execution."""

        if len(code_blocks) == 0:
            raise ValueError("No code blocks to execute.")

        outputs = []
        files = []
        last_exit_code = 0
        for code_block in code_blocks:
            lang = self.LANGUAGE_ALIASES.get(code_block.language.lower(), code_block.language.lower())
            if lang not in self.DEFAULT_EXECUTION_POLICY:
                outputs.append(f"Unsupported language {lang}\n")
                last_exit_code = 1
                break

            execute_code = self.execution_policies.get(lang, False)
            code = silence_pip(code_block.code, lang)
            if lang in ["bash", "shell", "sh"]:
                code = "\n".join(["#!/bin/bash", code])

            try:
                filename = _get_file_name_from_content(code, self._work_dir)
            except ValueError:
                outputs.append("Filename is not in the workspace")
                last_exit_code = 1
                break

            if not filename:
                extension = self.LANGUAGE_FILE_EXTENSION.get(lang, lang)
                filename = f"tmp_code_{md5(code.encode()).hexdigest()}.{extension}"

            code_path = self._work_dir / filename

            exec_script = textwrap.dedent(
                """
                if [ ! -d "{workspace}" ]; then
                  mkdir {workspace}
                fi
                cat <<EOM >{code_path}\n
                {code}
                EOM
                chmod +x {code_path}"""
            )
            exec_script = exec_script.format(workspace=str(self._work_dir), code_path=code_path, code=code)
            stream(
                self._api_client.connect_get_namespaced_pod_exec,
                self._pod.metadata.name,
                self._pod.metadata.namespace,
                command=["/bin/sh", "-c", exec_script],
                container=self._container_name,
                stderr=True,
                stdin=False,
                stdout=True,
                tty=False,
            )

            files.append(code_path)

            if not execute_code:
                outputs.append(f"Code saved to {str(code_path)}\n")
                continue

            resp = stream(
                self._api_client.connect_get_namespaced_pod_exec,
                self._pod.metadata.name,
                self._pod.metadata.namespace,
                command=["timeout", str(self._timeout), _cmd(lang), str(code_path)],
                container=self._container_name,
                stderr=True,
                stdin=False,
                stdout=True,
                tty=False,
                _preload_content=False,
            )

            stdout_messages = []
            stderr_messages = []
            while resp.is_open():
                resp.update(timeout=1)
                if resp.peek_stderr():
                    stderr_messages.append(resp.read_stderr())
                if resp.peek_stdout():
                    stdout_messages.append(resp.read_stdout())
            outputs.extend(stdout_messages + stderr_messages)
            exit_code = resp.returncode
            resp.close()

            if exit_code == 124:
                outputs.append("\n" + TIMEOUT_MSG)

            last_exit_code = exit_code
            if exit_code != 0:
                break

        code_file = str(files[0]) if files else None
        return CommandLineCodeResult(exit_code=last_exit_code, output="".join(outputs), code_file=code_file)

    def stop(self) -> None:
        """(Experimental) Stop the code executor."""
        self._cleanup()

    def __enter__(self) -> Self:
        return self

    def __exit__(
        self, exc_type: Optional[Type[BaseException]], exc_val: Optional[BaseException], exc_tb: Optional[TracebackType]
    ) -> None:
        self.stop()
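Two details of this executor are worth isolating: `_wait_for_ready` polls the pod phase until it reaches "Running" and fails once the timeout elapses, and execution timeouts are enforced by coreutils `timeout` inside the container, whose exit status 124 the executor maps to `TIMEOUT_MSG`. The readiness loop can be modeled without a cluster; the sketch below replaces the Kubernetes status call with an arbitrary `read_phase` callable and is a model of the logic, not the class itself.

```python
from time import sleep
from typing import Callable


def wait_for_ready(read_phase: Callable[[], str], timeout: float, stop_time: float = 0.1) -> float:
    # Standalone model of PodCommandLineCodeExecutor._wait_for_ready:
    # poll the pod phase every `stop_time` seconds until it is "Running",
    # raising once more than `timeout` seconds have elapsed.
    elapsed = 0.0
    while True:
        sleep(stop_time)
        elapsed += stop_time
        if elapsed > timeout:
            raise ValueError(f"pod is not Ready after timeout {timeout} seconds")
        if read_phase() == "Running":
            return elapsed


# A pod that becomes Running on the third poll.
phases = iter(["Pending", "Pending", "Running"])
elapsed = wait_for_ready(lambda: next(phases), timeout=5, stop_time=0.01)
```

Because the loop sleeps before the first check, a pod that is already Running still costs one `stop_time` interval; the real executor uses the same ordering.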
@@ -26,6 +26,12 @@ NON_CACHE_KEY = [
DEFAULT_AZURE_API_VERSION = "2024-02-01"
OAI_PRICE1K = {
    # https://openai.com/api/pricing/
+    # o1-preview
+    "o1-preview": (0.015, 0.060),
+    "o1-preview-2024-09-12": (0.015, 0.060),
+    # o1-mini
+    "o1-mini": (0.003, 0.012),
+    "o1-mini-2024-09-12": (0.003, 0.012),
    # gpt-4o
    "gpt-4o": (0.005, 0.015),
    "gpt-4o-2024-05-13": (0.005, 0.015),
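`OAI_PRICE1K` maps each model name to a `(prompt, completion)` price pair in dollars per 1K tokens. A simplified helper showing how such a table yields a per-call cost (the prices below are the entries added in the diff; the `cost` function is an illustration, not autogen's cost accounting):

```python
from typing import Dict, Tuple

# Prices per 1K tokens as (prompt, completion), from the diff above.
OAI_PRICE1K: Dict[str, Tuple[float, float]] = {
    "o1-preview": (0.015, 0.060),
    "o1-mini": (0.003, 0.012),
}


def cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one call, given token counts."""
    in_price, out_price = OAI_PRICE1K[model]
    return (prompt_tokens * in_price + completion_tokens * out_price) / 1000
```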
@@ -1 +1 @@
-__version__ = "0.2.36"
+__version__ = "0.2.37"
@ -0,0 +1,532 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Building an Agent with Long-term Memory using Autogen and Zep\n",
|
||||
"\n",
|
||||
"This notebook walks through how to build an Autogen Agent with long-term memory. Zep builds a knowledge graph from user interactions with the agent, enabling the agent to recall relevant facts from previous conversations or user interactions.\n",
|
||||
"\n",
|
||||
"In this notebook we will:\n",
|
||||
"- Create an Autogen Agent class that extends `ConversableAgent` by adding long-term memory\n",
|
||||
"- Create a Mental Health Assistant Agent, CareBot, that acts as a counselor and coach.\n",
|
||||
"- Create a user Agent, Cathy, who stands in for our expected user.\n",
|
||||
"- Demonstrate preloading chat history into Zep.\n",
|
||||
"- Demonstrate the agents in conversation, with CareBot recalling facts from previous conversations with Cathy.\n",
|
||||
"- Inspect Facts within Zep, and demonstrate how to use Zep's Fact Ratings to improve the quality of returned facts.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Requirements\n",
|
||||
"\n",
|
||||
"````{=mdx}\n",
|
||||
":::info Requirements\n",
|
||||
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
|
||||
"\n",
|
||||
"```bash\n",
|
||||
"pip install autogen~=0.3 zep-cloud python-dotenv\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||
":::\n",
|
||||
"````"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"flaml.automl is not available. Please install flaml[automl] to enable AutoML functionalities.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import uuid\n",
|
||||
"from typing import Dict, Union\n",
|
||||
"\n",
|
||||
"from dotenv import load_dotenv\n",
|
||||
"\n",
|
||||
"from autogen import Agent, ConversableAgent\n",
|
||||
"\n",
|
||||
"load_dotenv()\n",
|
||||
"\n",
|
||||
"config_list = [\n",
|
||||
" {\n",
|
||||
" \"model\": \"gpt-4o-mini\",\n",
|
||||
" \"api_key\": os.environ.get(\"OPENAI_API_KEY\"),\n",
|
||||
" \"max_tokens\": 1024,\n",
|
||||
" }\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## initiualize the Zep Client\n",
|
||||
"\n",
|
||||
"You can sign up for a Zep account here: https://www.getzep.com/"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from zep_cloud import FactRatingExamples, FactRatingInstruction, Message\n",
|
||||
"from zep_cloud.client import AsyncZep\n",
|
||||
"\n",
|
||||
"MIN_FACT_RATING = 0.3\n",
|
||||
"\n",
|
||||
"# Configure Zep\n",
|
||||
"zep = AsyncZep(api_key=os.environ.get(\"ZEP_API_KEY\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def convert_to_zep_messages(chat_history: list[dict[str, str | None]]) -> list[Message]:\n",
|
||||
" \"\"\"\n",
|
||||
" Convert chat history to Zep messages.\n",
|
||||
"\n",
|
||||
" Args:\n",
|
||||
" chat_history (list): List of dictionaries containing chat messages.\n",
|
||||
"\n",
|
||||
" Returns:\n",
|
||||
" list: List of Zep Message objects.\n",
|
||||
" \"\"\"\n",
|
||||
" return [\n",
|
||||
" Message(\n",
|
||||
" role_type=msg[\"role\"],\n",
|
||||
" role=msg.get(\"name\", None),\n",
|
||||
" content=msg[\"content\"],\n",
|
||||
" )\n",
|
||||
" for msg in chat_history\n",
|
||||
" ]"
|
||||
]
|
||||
},
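As a quick illustration of the mapping above, here is a minimal sketch of the same role/name conversion using a plain dataclass as a stand-in for `zep_cloud`'s `Message` (the real class is used in the cell above; `FakeZepMessage` exists only so the logic can be shown in isolation):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeZepMessage:
    # Stand-in for zep_cloud.Message, for illustration only.
    role_type: str
    role: Optional[str]
    content: str


def convert(chat_history):
    # Mirrors convert_to_zep_messages: role -> role_type, optional name -> role.
    return [
        FakeZepMessage(
            role_type=msg["role"],
            role=msg.get("name"),
            content=msg["content"],
        )
        for msg in chat_history
    ]


msgs = convert([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "name": "CareBot", "content": "Hi Cathy"},
])
print(msgs[0].role_type, msgs[0].role)  # user None
print(msgs[1].role_type, msgs[1].role)  # assistant CareBot
```

Note that messages without a `name` key map to `role=None`, which Zep accepts as an anonymous speaker.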
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## ZepConversableAgent\n",
|
||||
"\n",
|
||||
"The `ZepConversableAgent` is a custom implementation of the `ConversableAgent` that integrates with Zep for long-term memory management. This class extends the functionality of the base `ConversableAgent` by adding Zep-specific features for persisting and retrieving facts from long-term memory."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"class ZepConversableAgent(ConversableAgent):\n",
|
||||
" \"\"\"\n",
|
||||
" A custom ConversableAgent that integrates with Zep for long-term memory.\n",
|
||||
" \"\"\"\n",
|
||||
"\n",
|
||||
" def __init__(\n",
|
||||
" self,\n",
|
||||
" name: str,\n",
|
||||
" system_message: str,\n",
|
||||
" llm_config: dict,\n",
|
||||
" function_map: dict,\n",
|
||||
" human_input_mode: str,\n",
|
||||
" zep_session_id: str,\n",
|
||||
" ):\n",
|
||||
" super().__init__(\n",
|
||||
" name=name,\n",
|
||||
" system_message=system_message,\n",
|
||||
" llm_config=llm_config,\n",
|
||||
" function_map=function_map,\n",
|
||||
" human_input_mode=human_input_mode,\n",
|
||||
" )\n",
|
||||
" self.zep_session_id = zep_session_id\n",
|
||||
" # store the original system message as we will update it with relevant facts from Zep\n",
|
||||
" self.original_system_message = system_message\n",
|
||||
" self.register_hook(\"a_process_last_received_message\", self.persist_user_messages)\n",
|
||||
" self.register_hook(\"a_process_message_before_send\", self.persist_assistant_messages)\n",
|
||||
"\n",
|
||||
" async def persist_assistant_messages(\n",
|
||||
" self, sender: Agent, message: Union[Dict, str], recipient: Agent, silent: bool\n",
|
||||
" ):\n",
|
||||
" \"\"\"Agent sends a message to the user. Add the message to Zep.\"\"\"\n",
|
||||
"\n",
|
||||
" # Assume message is a string\n",
|
||||
" zep_messages = convert_to_zep_messages([{\"role\": \"assistant\", \"name\": self.name, \"content\": message}])\n",
|
||||
" await zep.memory.add(session_id=self.zep_session_id, messages=zep_messages)\n",
|
||||
"\n",
|
||||
" return message\n",
|
||||
"\n",
|
||||
" async def persist_user_messages(self, messages: list[dict[str, str]] | str):\n",
|
||||
" \"\"\"\n",
|
||||
" User sends a message to the agent. Add the message to Zep and\n",
|
||||
" update the system message with relevant facts from Zep.\n",
|
||||
" \"\"\"\n",
|
||||
" # Assume messages is a string\n",
|
||||
" zep_messages = convert_to_zep_messages([{\"role\": \"user\", \"content\": messages}])\n",
|
||||
" await zep.memory.add(session_id=self.zep_session_id, messages=zep_messages)\n",
|
||||
"\n",
|
||||
" memory = await zep.memory.get(self.zep_session_id, min_rating=MIN_FACT_RATING)\n",
|
||||
"\n",
|
||||
" # Update the system message with the relevant facts retrieved from Zep\n",
|
||||
" self.update_system_message(\n",
|
||||
" self.original_system_message\n",
|
||||
" + f\"\\n\\nRelevant facts about the user and their prior conversation:\\n{memory.relevant_facts}\"\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" return messages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Zep User and Session Management\n",
|
||||
"\n",
|
||||
"### Zep User\n",
|
||||
"A Zep User represents an individual interacting with your application. Each User can have multiple Sessions associated with them, allowing you to track and manage interactions over time. The unique identifier for each user is their `UserID`, which can be any string value (e.g., username, email address, or UUID).\n",
|
||||
"\n",
|
||||
"### Zep Session\n",
|
||||
"A Session represents a conversation and can be associated with Users in a one-to-many relationship. Chat messages are added to Sessions, with each session having many messages.\n",
|
||||
"\n",
|
||||
"### Fact Rating\n",
|
||||
" \n",
|
||||
"Fact Rating is a feature in Zep that allows you to rate the importance or relevance of facts extracted from conversations. This helps in prioritizing and filtering information when retrieving memory artifacts. Here, we rate facts based on poignancy. We provide a definition of poignancy and several examples of highly poignant and low-poignancy facts. When retrieving memory, you can use the `min_rating` parameter to filter facts based on their importance.\n",
|
||||
" \n",
|
||||
"Fact Rating helps ensure the most relevant information, especially in long or complex conversations, is used to ground the agent.\n"
|
||||
]
|
||||
},
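The effect of `min_rating` can be sketched in plain Python: facts rated below the threshold are simply dropped before they reach the agent. (The fact strings and ratings below are made up for illustration; Zep performs this filtering server-side.)

```python
MIN_FACT_RATING = 0.3

# Hypothetical (fact, rating) pairs as Zep might extract and rate them.
facts = [
    ("Cathy bought a new brand of toothpaste.", 0.1),
    ("Cathy completed a challenging marathon.", 0.5),
    ("Cathy is grieving the loss of her mother.", 0.75),
]

# Keep only facts at or above the threshold, mirroring min_rating.
relevant = [fact for fact, rating in facts if rating >= MIN_FACT_RATING]
print(relevant)
```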
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Session(classifications=None, created_at='2024-10-07T21:12:13.952672Z', deleted_at=None, ended_at=None, fact_rating_instruction=FactRatingInstruction(examples=FactRatingExamples(high=\"The user received news of a family member's serious illness.\", low='The user bought a new brand of toothpaste.', medium='The user completed a challenging marathon.'), instruction='Rate the facts by poignancy. Highly poignant \\nfacts have a significant emotional impact or relevance to the user. \\nLow poignant facts are minimally relevant or of little emotional \\nsignificance.'), fact_version_uuid=None, facts=None, id=774, metadata=None, project_uuid='00000000-0000-0000-0000-000000000000', session_id='f3854ad0-5bd4-4814-a814-ec0880817953', updated_at='2024-10-07T21:12:13.952672Z', user_id='Cathy1023', uuid_='31ab3314-5ac8-4361-ad11-848fb7befedf')"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"bot_name = \"CareBot\"\n",
|
||||
"user_name = \"Cathy\"\n",
|
||||
"\n",
|
||||
"user_id = user_name + str(uuid.uuid4())[:4]\n",
|
||||
"session_id = str(uuid.uuid4())\n",
|
||||
"\n",
|
||||
"await zep.user.add(user_id=user_id)\n",
|
||||
"\n",
|
||||
"fact_rating_instruction = \"\"\"Rate the facts by poignancy. Highly poignant\n",
|
||||
" facts have a significant emotional impact or relevance to the user.\n",
|
||||
" Low poignant facts are minimally relevant or of little emotional significance.\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"fact_rating_examples = FactRatingExamples(\n",
|
||||
" high=\"The user received news of a family member's serious illness.\",\n",
|
||||
" medium=\"The user completed a challenging marathon.\",\n",
|
||||
" low=\"The user bought a new brand of toothpaste.\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"await zep.memory.add_session(\n",
|
||||
" user_id=user_id,\n",
|
||||
" session_id=session_id,\n",
|
||||
" fact_rating_instruction=FactRatingInstruction(\n",
|
||||
" instruction=fact_rating_instruction,\n",
|
||||
" examples=fact_rating_examples,\n",
|
||||
" ),\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Preload a prior conversation into Zep\n",
|
||||
"\n",
|
||||
"We'll load a prior conversation into long-term memory. We'll use facts derived from this conversation when Cathy restarts the conversation with CareBot, ensuring CareBot has context."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"SuccessResponse(message='OK')"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat_history = [\n",
|
||||
" {\n",
|
||||
" \"role\": \"assistant\",\n",
|
||||
" \"name\": \"carebot\",\n",
|
||||
" \"content\": \"Hi Cathy, how are you doing today?\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"role\": \"user\",\n",
|
||||
" \"name\": \"Cathy\",\n",
|
||||
" \"content\": \"To be honest, I've been feeling a bit down and demotivated lately. It's been tough.\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"role\": \"assistant\",\n",
|
||||
" \"name\": \"CareBot\",\n",
|
||||
" \"content\": \"I'm sorry to hear that you're feeling down and demotivated, Cathy. It's understandable given the challenges you're facing. Can you tell me more about what's been going on?\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"role\": \"user\",\n",
|
||||
" \"name\": \"Cathy\",\n",
|
||||
" \"content\": \"Well, I'm really struggling to process the passing of my mother.\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"role\": \"assistant\",\n",
|
||||
" \"name\": \"CareBot\",\n",
|
||||
" \"content\": \"I'm deeply sorry for your loss, Cathy. Losing a parent is incredibly difficult. It's normal to struggle with grief, and there's no 'right' way to process it. Would you like to talk about your mother or how you're coping?\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"role\": \"user\",\n",
|
||||
" \"name\": \"Cathy\",\n",
|
||||
" \"content\": \"Yes, I'd like to talk about my mother. She was a kind and loving person.\",\n",
|
||||
" },\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"# Convert chat history to Zep messages\n",
|
||||
"zep_messages = convert_to_zep_messages(chat_history)\n",
|
||||
"\n",
|
||||
"await zep.memory.add(session_id=session_id, messages=zep_messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Review all facts in Zep\n",
|
||||
"\n",
|
||||
"We query all facts for this user session. Only facts that meet the `MIN_FACT_RATING` threshold are returned."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"created_at='2024-10-07T21:12:15.96584Z' fact='Cathy describes her mother as a kind and loving person.' rating=0.5 uuid_='6a086a73-d4b8-4c1b-9b2f-08d5d326d813'\n",
|
||||
"created_at='2024-10-07T21:12:15.96584Z' fact='Cathy has been feeling down and demotivated lately.' rating=0.5 uuid_='e19d959c-2a01-4cc7-9d49-108719f1a749'\n",
|
||||
"created_at='2024-10-07T21:12:15.96584Z' fact='Cathy is struggling to process the passing of her mother.' rating=0.75 uuid_='d6c12a5d-d2a0-486e-b25d-3d4bdc5ff466'\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"response = await zep.memory.get_session_facts(session_id=session_id, min_rating=MIN_FACT_RATING)\n",
|
||||
"\n",
|
||||
"for r in response.facts:\n",
|
||||
" print(r)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create the Autogen agent, CareBot, an instance of `ZepConversableAgent`\n",
|
||||
"\n",
|
||||
"We pass the current `session_id` to the CareBot agent, which allows it to retrieve relevant facts related to the conversation with Cathy."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"carebot_system_message = \"\"\"\n",
|
||||
"You are a compassionate mental health bot and caregiver. Review information about the user and their prior conversation below and respond accordingly.\n",
|
||||
"Keep responses empathetic and supportive. And remember, always prioritize the user's well-being and mental health. Keep your responses very concise and to the point.\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"agent = ZepConversableAgent(\n",
|
||||
" bot_name,\n",
|
||||
" system_message=carebot_system_message,\n",
|
||||
" llm_config={\"config_list\": config_list},\n",
|
||||
" function_map=None, # No registered functions, by default it is None.\n",
|
||||
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
|
||||
" zep_session_id=session_id,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create the Autogen agent, Cathy\n",
|
||||
"\n",
|
||||
"Cathy is a stand-in for a human. When building a production application, you'd replace Cathy with a human-in-the-loop pattern.\n",
|
||||
"\n",
|
||||
"**Note** that we're instructing Cathy to start the conversation with CareBot by asking about her previous session. This is an opportunity for us to test whether fact retrieval from Zep's long-term memory is working."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"cathy = ConversableAgent(\n",
|
||||
" user_name,\n",
|
||||
" system_message=\"You are returning to your conversation with CareBot, a mental health bot. Ask the bot about your previous session.\",\n",
|
||||
" llm_config={\"config_list\": config_list},\n",
|
||||
" human_input_mode=\"NEVER\", # Never ask for human input.\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Start the conversation\n",
|
||||
"\n",
|
||||
"We use Autogen's `a_initiate_chat` method to get the two agents conversing. CareBot is the primary agent.\n",
|
||||
"\n",
|
||||
"**NOTE** how CareBot is able to recall the past conversation about Cathy's mother in detail, having had relevant facts from Zep added to its system prompt."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"result = await agent.a_initiate_chat(\n",
|
||||
" cathy,\n",
|
||||
" message=\"Hi Cathy, nice to see you again. How are you doing today?\",\n",
|
||||
" max_turns=3,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Review current facts in Zep\n",
|
||||
"\n",
|
||||
"Let's see how the facts have evolved as the conversation has progressed."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"created_at='2024-10-07T20:04:28.397184Z' fact=\"Cathy wants to reflect on a previous conversation about her mother and explore the topic of her mother's passing further.\" rating=0.75 uuid_='56488eeb-d8ac-4b2f-8acc-75f71b56ad76'\n",
|
||||
"created_at='2024-10-07T20:04:28.397184Z' fact='Cathy is struggling to process the passing of her mother and has been feeling down and demotivated lately.' rating=0.75 uuid_='0fea3f05-ed1a-4e39-a092-c91f8af9e501'\n",
|
||||
"created_at='2024-10-07T20:04:28.397184Z' fact='Cathy describes her mother as a kind and loving person.' rating=0.5 uuid_='131de203-2984-4cba-9aef-e500611f06d9'\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"response = await zep.memory.get_session_facts(session_id, min_rating=MIN_FACT_RATING)\n",
|
||||
"\n",
|
||||
"for r in response.facts:\n",
|
||||
" print(r)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Search over Facts in Zep's long-term memory\n",
|
||||
"\n",
|
||||
"In addition to the `memory.get` method, which uses the current conversation to retrieve facts, we can also search Zep with our own keywords. Here, we retrieve facts using a query. Again, we use fact ratings to limit the returned facts to only those with a high poignancy rating.\n",
|
||||
"\n",
|
||||
"The `memory.search_sessions` API may be used as an Agent tool, enabling an agent to search across user memory for relevant facts."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"created_at='2024-10-07T20:04:28.397184Z' fact='Cathy describes her mother as a kind and loving person.' rating=0.5 uuid_='131de203-2984-4cba-9aef-e500611f06d9'\n",
|
||||
"created_at='2024-10-07T20:04:28.397184Z' fact='Cathy is struggling to process the passing of her mother and has been feeling down and demotivated lately.' rating=0.75 uuid_='0fea3f05-ed1a-4e39-a092-c91f8af9e501'\n",
|
||||
"created_at='2024-10-07T20:04:28.397184Z' fact=\"Cathy wants to reflect on a previous conversation about her mother and explore the topic of her mother's passing further.\" rating=0.75 uuid_='56488eeb-d8ac-4b2f-8acc-75f71b56ad76'\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"response = await zep.memory.search_sessions(\n",
|
||||
" text=\"What do you know about Cathy's family?\",\n",
|
||||
" user_id=user_id,\n",
|
||||
" search_scope=\"facts\",\n",
|
||||
" min_fact_rating=MIN_FACT_RATING,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"for r in response.results:\n",
|
||||
" print(r.fact)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"front_matter": {
|
||||
"tags": [
|
||||
"memory"
|
||||
],
|
||||
"description": "Agent Memory with Zep."
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
|
@ -0,0 +1,579 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Using RetrieveChat Powered by Couchbase Capella for Retrieve Augmented Code Generation and Question Answering\n",
|
||||
"\n",
|
||||
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
|
||||
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
|
||||
"\n",
|
||||
"RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implements a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n",
|
||||
"\n",
|
||||
"## Table of Contents\n",
|
||||
"We'll demonstrate an example of using RetrieveChat for code generation and question answering:\n",
|
||||
"\n",
|
||||
"- [Example 1: Generate code based off docstrings w/o human feedback](#example-1)\n",
|
||||
"\n",
|
||||
"````{=mdx}\n",
|
||||
":::info Requirements\n",
|
||||
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
|
||||
"\n",
|
||||
"```bash\n",
|
||||
"pip install pyautogen[retrievechat-couchbase] flaml[automl]\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"For more information, please refer to the [installation guide](/docs/installation/).\n",
|
||||
":::\n",
|
||||
"````\n",
|
||||
"\n",
|
||||
"Ensure you have a Couchbase Capella cluster running. Read more on how to get started [here](https://docs.couchbase.com/cloud/get-started/intro.html)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set your API Endpoint\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"models to use: ['gpt-4o-mini']\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import sys\n",
|
||||
"\n",
|
||||
"from autogen import AssistantAgent\n",
|
||||
"\n",
|
||||
"sys.path.append(os.path.abspath(\"/workspaces/autogen/autogen/agentchat/contrib\"))\n",
|
||||
"\n",
|
||||
"from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent\n",
|
||||
"\n",
|
||||
"# Accepted file formats that can be stored in\n",
|
||||
"# a vector database instance\n",
|
||||
"from autogen.retrieve_utils import TEXT_FORMATS\n",
|
||||
"\n",
|
||||
"config_list = [{\"model\": \"gpt-4o-mini\", \"api_key\": os.environ[\"OPENAI_API_KEY\"], \"api_type\": \"openai\"}]\n",
|
||||
"assert len(config_list) > 0\n",
|
||||
"print(\"models to use: \", [config_list[i][\"model\"] for i in range(len(config_list))])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"````{=mdx}\n",
|
||||
":::tip\n",
|
||||
"Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n",
|
||||
":::\n",
|
||||
"````\n",
|
||||
"\n",
|
||||
"## Construct agents for RetrieveChat\n",
|
||||
"\n",
|
||||
"We start by initializing the `AssistantAgent` and `RetrieveUserProxyAgent`. The system message of the `AssistantAgent` needs to be set to \"You are a helpful assistant.\" The detailed instructions are given in the user message. Later, we will use `RetrieveUserProxyAgent.message_generator` to combine the instructions and a retrieval-augmented generation task into an initial prompt to be sent to the LLM assistant."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Accepted file formats for `docs_path`:\n",
|
||||
"['txt', 'json', 'csv', 'tsv', 'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml', 'pdf']\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\"Accepted file formats for `docs_path`:\")\n",
|
||||
"print(TEXT_FORMATS)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# 1. create an AssistantAgent instance named \"assistant\"\n",
|
||||
"assistant = AssistantAgent(\n",
|
||||
" name=\"assistant\",\n",
|
||||
" system_message=\"You are a helpful assistant.\",\n",
|
||||
" llm_config={\n",
|
||||
" \"timeout\": 600,\n",
|
||||
" \"cache_seed\": 42,\n",
|
||||
" \"config_list\": config_list,\n",
|
||||
" },\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# 2. create the RetrieveUserProxyAgent instance named \"ragproxyagent\"\n",
|
||||
"# Refer to https://microsoft.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent\n",
|
||||
"# and https://microsoft.github.io/autogen/docs/reference/agentchat/contrib/vectordb/couchbase\n",
|
||||
"# for more information on the RetrieveUserProxyAgent and CouchbaseVectorDB\n",
|
||||
"ragproxyagent = RetrieveUserProxyAgent(\n",
|
||||
" name=\"ragproxyagent\",\n",
|
||||
" human_input_mode=\"NEVER\",\n",
|
||||
" max_consecutive_auto_reply=3,\n",
|
||||
" retrieve_config={\n",
|
||||
" \"task\": \"code\",\n",
|
||||
" \"docs_path\": [\n",
|
||||
" \"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md\",\n",
|
||||
" \"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md\",\n",
|
||||
" ],\n",
|
||||
" \"chunk_token_size\": 2000,\n",
|
||||
" \"model\": config_list[0][\"model\"],\n",
|
||||
" \"vector_db\": \"couchbase\", # Couchbase Capella VectorDB\n",
|
||||
" \"collection_name\": \"demo_collection\", # Couchbase Capella collection name to be utilized/created\n",
|
||||
" \"db_config\": {\n",
|
||||
" \"connection_string\": os.environ[\"CB_CONN_STR\"], # Couchbase Capella connection string\n",
|
||||
" \"username\": os.environ[\"CB_USERNAME\"], # Couchbase Capella username\n",
|
||||
" \"password\": os.environ[\"CB_PASSWORD\"], # Couchbase Capella password\n",
|
||||
" \"bucket_name\": \"test_db\", # Couchbase Capella bucket name\n",
|
||||
" \"scope_name\": \"test_scope\", # Couchbase Capella scope name\n",
|
||||
" \"index_name\": \"vector_index\", # Couchbase Capella index name to be created\n",
|
||||
" },\n",
|
||||
" \"get_or_create\": True, # set to False if you don't want to reuse an existing collection\n",
|
||||
"        \"overwrite\": False,  # set to True if you want to overwrite an existing collection; each overwrite forces an index creation and reupload of documents\n",
|
||||
" },\n",
|
||||
" code_execution_config=False, # set to False if you don't want to execute the code\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Example 1\n",
|
||||
"\n",
|
||||
"[Back to top](#table-of-contents)\n",
|
||||
"\n",
|
||||
"Use RetrieveChat to help generate sample code and automatically run the code and fix errors if there is any.\n",
|
||||
"\n",
|
||||
"Problem: Which API should I use if I want to use FLAML for a classification task and I want to train the model in 30 seconds? Use Spark to parallelize the training. Force-cancel jobs if the time limit is reached.\n",
|
||||
"\n",
|
||||
"Note: You may need to create an index on the cluster to query."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"2024-10-16 12:08:07,062 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - \u001b[32mUse the existing collection `demo_collection`.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Trying to create collection.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"2024-10-16 12:08:07,953 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Found 2 chunks.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"VectorDB returns doc_ids: [['bdfbc921', '7968cf3c']]\n",
|
||||
"\u001b[32mAdding content of doc bdfbc921 to context.\u001b[0m\n",
|
||||
"\u001b[32mAdding content of doc 7968cf3c to context.\u001b[0m\n",
|
||||
"\u001b[33mragproxyagent\u001b[0m (to assistant):\n",
|
||||
"\n",
|
||||
"You're a retrieve augmented coding assistant. You answer user's questions based on your own knowledge and the\n",
|
||||
"context provided by the user.\n",
|
||||
"If you can't answer the question with or without the current context, you should reply exactly `UPDATE CONTEXT`.\n",
|
||||
"For code generation, you must obey the following rules:\n",
|
||||
"Rule 1. You MUST NOT install any packages because all the packages needed are already installed.\n",
|
||||
"Rule 2. You must follow the formats below to write your code:\n",
|
||||
"```language\n",
|
||||
"# your code\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"User's question is: How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached.\n",
|
||||
"\n",
|
||||
"Context is: # Integrate - Spark\n",
|
||||
"\n",
|
||||
"FLAML has integrated Spark for distributed training. There are two main aspects of integration with Spark:\n",
|
||||
"\n",
|
||||
"- Use Spark ML estimators for AutoML.\n",
|
||||
"- Use Spark to run training in parallel spark jobs.\n",
|
||||
"\n",
|
||||
"## Spark ML Estimators\n",
|
||||
"\n",
|
||||
"FLAML integrates estimators based on Spark ML models. These models are trained in parallel using Spark, so we called them Spark estimators. To use these models, you first need to organize your data in the required format.\n",
|
||||
"\n",
|
||||
"### Data\n",
|
||||
"\n",
|
||||
"For Spark estimators, AutoML only consumes Spark data. FLAML provides a convenient function `to_pandas_on_spark` in the `flaml.automl.spark.utils` module to convert your data into a pandas-on-spark (`pyspark.pandas`) dataframe/series, which Spark estimators require.\n",
|
||||
"\n",
|
||||
"This utility function takes data in the form of a `pandas.Dataframe` or `pyspark.sql.Dataframe` and converts it into a pandas-on-spark dataframe. It also takes `pandas.Series` or `pyspark.sql.Dataframe` and converts it into a [pandas-on-spark](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html) series. If you pass in a `pyspark.pandas.Dataframe`, it will not make any changes.\n",
|
||||
"\n",
|
||||
"This function also accepts optional arguments `index_col` and `default_index_type`.\n",
|
||||
"\n",
|
||||
"- `index_col` is the column name to use as the index, default is None.\n",
|
||||
"- `default_index_type` is the default index type, default is \"distributed-sequence\". More info about default index type could be found on Spark official [documentation](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/options.html#default-index-type)\n",
|
||||
"\n",
|
||||
"Here is an example code snippet for Spark Data:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"import pandas as pd\n",
|
||||
"from flaml.automl.spark.utils import to_pandas_on_spark\n",
|
||||
"\n",
|
||||
"# Creating a dictionary\n",
|
||||
"data = {\n",
|
||||
" \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n",
|
||||
" \"Age_Years\": [20, 15, 10, 7, 25],\n",
|
||||
" \"Price\": [100000, 200000, 300000, 240000, 120000],\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"# Creating a pandas DataFrame\n",
|
||||
"dataframe = pd.DataFrame(data)\n",
|
||||
"label = \"Price\"\n",
|
||||
"\n",
|
||||
"# Convert to pandas-on-spark dataframe\n",
|
||||
"psdf = to_pandas_on_spark(dataframe)\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"To use Spark ML models you need to format your data appropriately. Specifically, use [`VectorAssembler`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.VectorAssembler.html) to merge all feature columns into a single vector column.\n",
|
||||
"\n",
|
||||
"Here is an example of how to use it:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"from pyspark.ml.feature import VectorAssembler\n",
|
||||
"\n",
|
||||
"columns = psdf.columns\n",
|
||||
"feature_cols = [col for col in columns if col != label]\n",
|
||||
"featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n",
|
||||
"psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[\"index\", \"features\"]\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Later in conducting the experiment, use your pandas-on-spark data like non-spark data and pass them using `X_train, y_train` or `dataframe, label`.\n",
|
||||
"\n",
|
||||
"### Estimators\n",
|
||||
"\n",
|
||||
"#### Model List\n",
|
||||
"\n",
|
||||
"- `lgbm_spark`: The class for fine-tuning Spark version LightGBM models, using [SynapseML](https://microsoft.github.io/SynapseML/docs/features/lightgbm/about/) API.\n",
|
||||
"\n",
|
||||
"#### Usage\n",
|
||||
"\n",
|
||||
"First, prepare your data in the required format as described in the previous section.\n",
|
||||
"\n",
|
||||
"By including the models you intend to try in the `estimators_list` argument to `flaml.automl`, FLAML will start trying configurations for these models. If your input is Spark data, FLAML will also use estimators with the `_spark` postfix by default, even if you haven't specified them.\n",
|
||||
"\n",
|
||||
"Here is an example code snippet using SparkML models in AutoML:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"import flaml\n",
|
||||
"\n",
|
||||
"# prepare your data in pandas-on-spark format as we previously mentioned\n",
|
||||
"\n",
|
||||
"automl = flaml.AutoML()\n",
|
||||
"settings = {\n",
|
||||
" \"time_budget\": 30,\n",
|
||||
" \"metric\": \"r2\",\n",
|
||||
" \"estimator_list\": [\"lgbm_spark\"], # this setting is optional\n",
|
||||
" \"task\": \"regression\",\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl.fit(\n",
|
||||
" dataframe=psdf,\n",
|
||||
" label=label,\n",
|
||||
" **settings,\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/automl_bankrupt_synapseml.ipynb)\n",
|
||||
"\n",
|
||||
"## Parallel Spark Jobs\n",
|
||||
"\n",
|
||||
"You can activate Spark as the parallel backend during parallel tuning in both [AutoML](/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning) and [Hyperparameter Tuning](/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning), by setting the `use_spark` to `true`. FLAML will dispatch your job to the distributed Spark backend using [`joblib-spark`](https://github.com/joblib/joblib-spark).\n",
|
||||
"\n",
|
||||
"Please note that you should not set `use_spark` to `true` when applying AutoML and Tuning for Spark Data. This is because only SparkML models will be used for Spark Data in AutoML and Tuning. As SparkML models run in parallel, there is no need to distribute them with `use_spark` again.\n",
|
||||
"\n",
|
||||
"All the Spark-related arguments are stated below. These arguments are available in both Hyperparameter Tuning and AutoML:\n",
|
||||
"\n",
|
||||
"- `use_spark`: boolean, default=False | Whether to use spark to run the training in parallel spark jobs. This can be used to accelerate training on large models and large datasets, but will incur more overhead in time and thus slow down training in some cases. GPU training is not supported yet when use_spark is True. For Spark clusters, by default, we will launch one trial per executor. However, sometimes we want to launch more trials than the number of executors (e.g., local mode). In this case, we can set the environment variable `FLAML_MAX_CONCURRENT` to override the detected `num_executors`. The final number of concurrent trials will be the minimum of `n_concurrent_trials` and `num_executors`.\n",
|
||||
"- `n_concurrent_trials`: int, default=1 | The number of concurrent trials. When n_concurrent_trials > 1, FLAML performes parallel tuning.\n",
|
||||
"- `force_cancel`: boolean, default=False | Whether to forcely cancel Spark jobs if the search time exceeded the time budget. Spark jobs include parallel tuning jobs and Spark-based model training jobs.\n",
|
||||
"\n",
|
||||
"An example code snippet for using parallel Spark jobs:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"import flaml\n",
|
||||
"\n",
|
||||
"automl_experiment = flaml.AutoML()\n",
|
||||
"automl_settings = {\n",
|
||||
" \"time_budget\": 30,\n",
|
||||
" \"metric\": \"r2\",\n",
|
||||
" \"task\": \"regression\",\n",
|
||||
" \"n_concurrent_trials\": 2,\n",
|
||||
" \"use_spark\": True,\n",
|
||||
" \"force_cancel\": True, # Activating the force_cancel option can immediately halt Spark jobs once they exceed the allocated time_budget.\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl.fit(\n",
|
||||
" dataframe=dataframe,\n",
|
||||
" label=label,\n",
|
||||
" **automl_settings,\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"[Link to notebook](https://github.com/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/integrate_spark.ipynb)\n",
|
||||
"# Research\n",
|
||||
"\n",
|
||||
"For technical details, please check our research publications.\n",
|
||||
"\n",
|
||||
"- [FLAML: A Fast and Lightweight AutoML Library](https://www.microsoft.com/en-us/research/publication/flaml-a-fast-and-lightweight-automl-library/). Chi Wang, Qingyun Wu, Markus Weimer, Erkang Zhu. MLSys 2021.\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wang2021flaml,\n",
|
||||
" title={FLAML: A Fast and Lightweight AutoML Library},\n",
|
||||
" author={Chi Wang and Qingyun Wu and Markus Weimer and Erkang Zhu},\n",
|
||||
" year={2021},\n",
|
||||
" booktitle={MLSys},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [Frugal Optimization for Cost-related Hyperparameters](https://arxiv.org/abs/2005.01571). Qingyun Wu, Chi Wang, Silu Huang. AAAI 2021.\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wu2021cfo,\n",
|
||||
" title={Frugal Optimization for Cost-related Hyperparameters},\n",
|
||||
" author={Qingyun Wu and Chi Wang and Silu Huang},\n",
|
||||
" year={2021},\n",
|
||||
" booktitle={AAAI},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [Economical Hyperparameter Optimization With Blended Search Strategy](https://www.microsoft.com/en-us/research/publication/economical-hyperparameter-optimization-with-blended-search-strategy/). Chi Wang, Qingyun Wu, Silu Huang, Amin Saied. ICLR 2021.\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wang2021blendsearch,\n",
|
||||
" title={Economical Hyperparameter Optimization With Blended Search Strategy},\n",
|
||||
" author={Chi Wang and Qingyun Wu and Silu Huang and Amin Saied},\n",
|
||||
" year={2021},\n",
|
||||
" booktitle={ICLR},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models](https://aclanthology.org/2021.acl-long.178.pdf). Susan Xueqing Liu, Chi Wang. ACL 2021.\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{liuwang2021hpolm,\n",
|
||||
" title={An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models},\n",
|
||||
" author={Susan Xueqing Liu and Chi Wang},\n",
|
||||
" year={2021},\n",
|
||||
" booktitle={ACL},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [ChaCha for Online AutoML](https://www.microsoft.com/en-us/research/publication/chacha-for-online-automl/). Qingyun Wu, Chi Wang, John Langford, Paul Mineiro and Marco Rossi. ICML 2021.\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wu2021chacha,\n",
|
||||
" title={ChaCha for Online AutoML},\n",
|
||||
" author={Qingyun Wu and Chi Wang and John Langford and Paul Mineiro and Marco Rossi},\n",
|
||||
" year={2021},\n",
|
||||
" booktitle={ICML},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [Fair AutoML](https://arxiv.org/abs/2111.06495). Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2111.06495 (2021).\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wuwang2021fairautoml,\n",
|
||||
" title={Fair AutoML},\n",
|
||||
" author={Qingyun Wu and Chi Wang},\n",
|
||||
" year={2021},\n",
|
||||
" booktitle={ArXiv preprint arXiv:2111.06495},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [Mining Robust Default Configurations for Resource-constrained AutoML](https://arxiv.org/abs/2202.09927). Moe Kayali, Chi Wang. ArXiv preprint arXiv:2202.09927 (2022).\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{kayaliwang2022default,\n",
|
||||
" title={Mining Robust Default Configurations for Resource-constrained AutoML},\n",
|
||||
" author={Moe Kayali and Chi Wang},\n",
|
||||
" year={2022},\n",
|
||||
" booktitle={ArXiv preprint arXiv:2202.09927},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [Targeted Hyperparameter Optimization with Lexicographic Preferences Over Multiple Objectives](https://openreview.net/forum?id=0Ij9_q567Ma). Shaokun Zhang, Feiran Jia, Chi Wang, Qingyun Wu. ICLR 2023 (notable-top-5%).\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{zhang2023targeted,\n",
|
||||
" title={Targeted Hyperparameter Optimization with Lexicographic Preferences Over Multiple Objectives},\n",
|
||||
" author={Shaokun Zhang and Feiran Jia and Chi Wang and Qingyun Wu},\n",
|
||||
" booktitle={International Conference on Learning Representations},\n",
|
||||
" year={2023},\n",
|
||||
" url={https://openreview.net/forum?id=0Ij9_q567Ma},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah. ArXiv preprint arXiv:2303.04673 (2023).\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wang2023EcoOptiGen,\n",
|
||||
" title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference},\n",
|
||||
" author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah},\n",
|
||||
" year={2023},\n",
|
||||
" booktitle={ArXiv preprint arXiv:2303.04673},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2306.01337 (2023).\n",
|
||||
"\n",
|
||||
"```bibtex\n",
|
||||
"@inproceedings{wu2023empirical,\n",
|
||||
" title={An Empirical Study on Challenging Math Problem Solving with GPT-4},\n",
|
||||
" author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang},\n",
|
||||
" year={2023},\n",
|
||||
" booktitle={ArXiv preprint arXiv:2306.01337},\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33massistant\u001b[0m (to ragproxyagent):\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"import pandas as pd\n",
|
||||
"from pyspark.ml.feature import VectorAssembler\n",
|
||||
"import flaml\n",
|
||||
"from flaml.automl.spark.utils import to_pandas_on_spark\n",
|
||||
"\n",
|
||||
"# Creating a dictionary for the example data\n",
|
||||
"data = {\n",
|
||||
" \"Square_Feet\": [800, 1200, 1800, 1500, 850],\n",
|
||||
" \"Age_Years\": [20, 15, 10, 7, 25],\n",
|
||||
" \"Price\": [100000, 200000, 300000, 240000, 120000],\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"# Creating a pandas DataFrame\n",
|
||||
"dataframe = pd.DataFrame(data)\n",
|
||||
"label = \"Price\"\n",
|
||||
"\n",
|
||||
"# Convert to pandas-on-spark dataframe\n",
|
||||
"psdf = to_pandas_on_spark(dataframe)\n",
|
||||
"\n",
|
||||
"# Prepare features using VectorAssembler\n",
|
||||
"columns = psdf.columns\n",
|
||||
"feature_cols = [col for col in columns if col != label]\n",
|
||||
"featurizer = VectorAssembler(inputCols=feature_cols, outputCol=\"features\")\n",
|
||||
"psdf = featurizer.transform(psdf.to_spark(index_col=\"index\"))[[\"index\", \"features\"]]\n",
|
||||
"\n",
|
||||
"# Setting up and running FLAML for AutoML with Spark\n",
|
||||
"automl = flaml.AutoML()\n",
|
||||
"automl_settings = {\n",
|
||||
" \"time_budget\": 30, # Set the time budget to 30 seconds\n",
|
||||
" \"metric\": \"r2\", # Performance metric\n",
|
||||
" \"task\": \"regression\", # Problem type\n",
|
||||
" \"n_concurrent_trials\": 2, # Number of concurrent trials\n",
|
||||
" \"use_spark\": True, # Use Spark for parallel jobs\n",
|
||||
" \"force_cancel\": True, # Force cancel jobs if time limit is reached\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl.fit(\n",
|
||||
" dataframe=psdf,\n",
|
||||
" label=label,\n",
|
||||
" **automl_settings\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mragproxyagent\u001b[0m (to assistant):\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33massistant\u001b[0m (to ragproxyagent):\n",
|
||||
"\n",
|
||||
"UPDATE CONTEXT\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[32mUpdating context and resetting conversation.\u001b[0m\n",
|
||||
"VectorDB returns doc_ids: [['bdfbc921', '7968cf3c']]\n",
|
||||
"\u001b[32mNo more context, will terminate.\u001b[0m\n",
|
||||
"\u001b[33mragproxyagent\u001b[0m (to assistant):\n",
|
||||
"\n",
|
||||
"TERMINATE\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# reset the assistant. Always reset the assistant before starting a new conversation.\n",
|
||||
"assistant.reset()\n",
|
||||
"\n",
|
||||
"# given a problem, we use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message.\n",
|
||||
"# the assistant receives the message and generates a response. The response will be sent back to the ragproxyagent for processing.\n",
|
||||
"# The conversation continues until the termination condition is met, in RetrieveChat, the termination condition when no human-in-loop is no code block detected.\n",
|
||||
"# With human-in-loop, the conversation will continue until the user says \"exit\".\n",
|
||||
"code_problem = \"How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached.\"\n",
|
||||
"chat_result = ragproxyagent.initiate_chat(assistant, message=ragproxyagent.message_generator, problem=code_problem)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"front_matter": {
|
||||
"description": "Explore the use of AutoGen's RetrieveChat for tasks like code generation from docstrings, answering complex questions with human feedback, and exploiting features like Update Context, custom prompts, and few-shot learning.",
|
||||
"tags": [
|
||||
"RAG"
|
||||
]
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.7"
|
||||
},
|
||||
"skip_test": "Requires interactive usage"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
|
@ -5,3 +5,4 @@ xpub_url: str = "tcp://127.0.0.1:5555"
|
|||
xsub_url: str = "tcp://127.0.0.1:5556"
|
||||
router_url: str = "tcp://127.0.0.1:5557"
|
||||
dealer_url: str = "tcp://127.0.0.1:5558"
|
||||
USE_COLOR_LOGGING = True
|
||||
|
|
|
@ -1,57 +1,38 @@
|
|||
import threading
|
||||
import traceback
|
||||
|
||||
import zmq
|
||||
|
||||
from .Config import xpub_url
|
||||
from .DebugLog import Debug, Error, Info
|
||||
from .actor_runtime import IMessageReceiver, IMsgActor, IRuntime
|
||||
from .debug_log import Debug, Info
|
||||
|
||||
|
||||
class Actor:
|
||||
class Actor(IMsgActor):
|
||||
def __init__(self, agent_name: str, description: str, start_thread: bool = True):
|
||||
"""Initialize the Actor with a name, description, and threading option."""
|
||||
self.actor_name: str = agent_name
|
||||
self.agent_description: str = description
|
||||
self.run = False
|
||||
self._start_event = threading.Event()
|
||||
self._start_thread = start_thread
|
||||
self._msg_receiver: IMessageReceiver = None
|
||||
self._runtime: IRuntime = None
|
||||
|
||||
def on_connect(self, network):
|
||||
Debug(self.actor_name, f"is connecting to {network}")
|
||||
def on_connect(self):
|
||||
"""Connect the actor to the runtime."""
|
||||
Debug(self.actor_name, f"is connecting to {self._runtime}")
|
||||
Debug(self.actor_name, "connected")
|
||||
|
||||
def on_txt_msg(self, msg: str, msg_type: str, receiver: str, sender: str) -> bool:
|
||||
"""Handle incoming text messages."""
|
||||
Info(self.actor_name, f"InBox: {msg}")
|
||||
return True
|
||||
|
||||
def on_bin_msg(self, msg: bytes, msg_type: str, receiver: str, sender: str) -> bool:
|
||||
"""Handle incoming binary messages."""
|
||||
Info(self.actor_name, f"Msg: receiver=[{receiver}], msg_type=[{msg_type}]")
|
||||
return True
|
||||
|
||||
def _msg_loop_init(self):
|
||||
Debug(self.actor_name, "recv thread started")
|
||||
self._socket: zmq.Socket = self._context.socket(zmq.SUB)
|
||||
self._socket.setsockopt(zmq.RCVTIMEO, 500)
|
||||
self._socket.connect(xpub_url)
|
||||
str_topic = f"{self.actor_name}"
|
||||
Debug(self.actor_name, f"subscribe to: {str_topic}")
|
||||
self._socket.setsockopt_string(zmq.SUBSCRIBE, f"{str_topic}")
|
||||
self._start_event.set()
|
||||
|
||||
def get_message(self):
|
||||
try:
|
||||
topic, msg_type, sender_topic, msg = self._socket.recv_multipart()
|
||||
topic = topic.decode("utf-8") # Convert bytes to string
|
||||
msg_type = msg_type.decode("utf-8") # Convert bytes to string
|
||||
sender_topic = sender_topic.decode("utf-8") # Convert bytes to string
|
||||
except zmq.Again:
|
||||
return None # No message received, continue to next iteration
|
||||
except Exception as e:
|
||||
Error(self.actor_name, f"recv thread encountered an error: {e}")
|
||||
traceback.print_exc()
|
||||
return None
|
||||
return topic, msg_type, sender_topic, msg
|
||||
|
||||
def dispatch_message(self, message):
|
||||
"""Dispatch the received message based on its type."""
|
||||
if message is None:
|
||||
return
|
||||
topic, msg_type, sender_topic, msg = message
|
||||
|
@ -65,40 +46,50 @@ class Actor:
|
|||
if not self.on_bin_msg(msg, msg_type, topic, sender_topic):
|
||||
self.run = False
|
||||
|
||||
def get_message(self):
|
||||
"""Retrieve a message from the runtime implementation."""
|
||||
return self._msg_receiver.get_message()
|
||||
|
||||
def _msg_loop(self):
|
||||
"""Main message loop for receiving and dispatching messages."""
|
||||
try:
|
||||
self._msg_loop_init()
|
||||
self._msg_receiver = self._runtime.get_new_msg_receiver()
|
||||
self._msg_receiver.init(self.actor_name)
|
||||
self._start_event.set()
|
||||
while self.run:
|
||||
message = self.get_message()
|
||||
message = self._msg_receiver.get_message()
|
||||
self.dispatch_message(message)
|
||||
except Exception as e:
|
||||
Debug(self.actor_name, f"recv thread encountered an error: {e}")
|
||||
traceback.print_exc()
|
||||
finally:
|
||||
self.run = False
|
||||
# In case there was an exception at startup signal
|
||||
# the main thread.
|
||||
self._start_event.set()
|
||||
self.run = False
|
||||
Debug(self.actor_name, "recv thread ended")
|
||||
|
||||
def on_start(self, context: zmq.Context):
|
||||
self._context = context
|
||||
self.run: bool = True
|
||||
def on_start(self, runtime: IRuntime):
|
||||
"""Start the actor and its message receiving thread if applicable."""
|
||||
self._runtime = runtime # Save the runtime
|
||||
self.run = True
|
||||
if self._start_thread:
|
||||
self._thread = threading.Thread(target=self._msg_loop)
|
||||
self._thread.start()
|
||||
self._start_event.wait()
|
||||
else:
|
||||
self._msg_loop_init()
|
||||
self._msg_receiver = self._runtime.get_new_msg_receiver()
|
||||
self._msg_receiver.init(self.actor_name)
|
||||
|
||||
def disconnect_network(self, network):
|
||||
"""Disconnect the actor from the network."""
|
||||
Debug(self.actor_name, f"is disconnecting from {network}")
|
||||
Debug(self.actor_name, "disconnected")
|
||||
self.stop()
|
||||
|
||||
def stop(self):
|
||||
"""Stop the actor and its message receiver."""
|
||||
self.run = False
|
||||
if self._start_thread:
|
||||
self._thread.join()
|
||||
self._socket.setsockopt(zmq.LINGER, 0)
|
||||
self._socket.close()
|
||||
self._msg_receiver.stop()
|
|
@ -0,0 +1,83 @@
|
|||
from abc import ABC, abstractmethod
|
||||
from typing import Any, Optional, Tuple
|
||||
|
||||
|
||||
class IActorConnector(ABC):
|
||||
"""
|
||||
Abstract base class for actor connectors. Each runtime will have a different implementation.
|
||||
Obtain an instance of the correct connector from the runtime by calling the runtime's find_by_xyz
|
||||
method.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def send_txt_msg(self, msg: str) -> None:
|
||||
"""
|
||||
Send a text message to the actor.
|
||||
|
||||
Args:
|
||||
msg (str): The text message to send.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def send_bin_msg(self, msg_type: str, msg: bytes) -> None:
|
||||
"""
|
||||
Send a binary message to the actor.
|
||||
|
||||
Args:
|
||||
msg_type (str): The type of the binary message.
|
||||
msg (bytes): The binary message to send.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def send_proto_msg(self, msg: Any) -> None:
|
||||
"""
|
||||
Send a protocol buffer message to the actor.
|
||||
|
||||
Args:
|
||||
msg (Any): The protocol buffer message to send.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def send_recv_proto_msg(
|
||||
self, msg: Any, num_attempts: int = 5
|
||||
) -> Tuple[Optional[str], Optional[str], Optional[bytes]]:
|
||||
"""
|
||||
Send a protocol buffer message and receive a response from the actor.
|
||||
|
||||
Args:
|
||||
msg (Any): The protocol buffer message to send.
|
||||
num_attempts (int, optional): Number of attempts to send and receive. Defaults to 5.
|
||||
|
||||
Returns:
|
||||
Tuple[Optional[str], Optional[str], Optional[bytes]]: A tuple containing the topic,
|
||||
message type, and response message, or None if no response is received.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def send_recv_msg(
|
||||
self, msg_type: str, msg: bytes, num_attempts: int = 5
|
||||
) -> Tuple[Optional[str], Optional[str], Optional[bytes]]:
|
||||
"""
|
||||
Send a binary message and receive a response from the actor.
|
||||
|
||||
Args:
|
||||
msg_type (str): The type of the binary message.
|
||||
msg (bytes): The binary message to send.
|
||||
num_attempts (int, optional): Number of attempts to send and receive. Defaults to 5.
|
||||
|
||||
Returns:
|
||||
Tuple[Optional[str], Optional[str], Optional[bytes]]: A tuple containing the topic,
|
||||
message type, and response message, or None if no response is received.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def close(self) -> None:
|
||||
"""
|
||||
Close the actor connector and release any resources.
|
||||
"""
|
||||
pass
|
|
@ -1,36 +1,108 @@
|
|||
from abc import ABC, abstractmethod
|
||||
from typing import List
|
||||
|
||||
from .Actor import Actor
|
||||
from .ActorConnector import ActorConnector
|
||||
from .actor_connector import IActorConnector
|
||||
from .proto.CAP_pb2 import ActorInfo
|
||||
|
||||
|
||||
class IRuntime(ABC):
|
||||
class IMsgActor(ABC):
|
||||
"""Abstract base class for message based actors."""
|
||||
|
||||
@abstractmethod
|
||||
def register(self, actor: Actor):
|
||||
def on_connect(self, runtime: "IRuntime"):
|
||||
"""Called when the actor connects to the runtime."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def on_txt_msg(self, msg: str, msg_type: str, receiver: str, sender: str) -> bool:
|
||||
"""Handle incoming text messages."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def on_bin_msg(self, msg: bytes, msg_type: str, receiver: str, sender: str) -> bool:
|
||||
"""Handle incoming binary messages."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def on_start(self):
|
||||
"""Called when the actor starts."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def stop(self):
|
||||
"""Stop the actor."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def dispatch_message(self, message):
|
||||
"""Dispatch the received message based on its type."""
|
||||
pass
|
||||
|
||||
|
||||
class IMessageReceiver(ABC):
|
||||
"""Abstract base class for message receivers. Implementations are runtime specific."""
|
||||
|
||||
@abstractmethod
|
||||
def init(self, actor_name: str):
|
||||
"""Initialize the message receiver."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def add_listener(self, topic: str):
|
||||
"""Add a topic to the message receiver."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_message(self):
|
||||
"""Retrieve a message from the runtime implementation."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def stop(self):
|
||||
"""Stop the message receiver."""
|
||||
pass
|
||||
|
||||
|
||||
# Abstract base class for the runtime environment
|
||||
class IRuntime(ABC):
|
||||
"""Abstract base class for the actor runtime environment."""
|
||||
|
||||
@abstractmethod
|
||||
def register(self, actor: IMsgActor):
|
||||
"""Register an actor with the runtime."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_new_msg_receiver(self) -> IMessageReceiver:
|
||||
"""Create and return a new message receiver."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def connect(self):
|
||||
"""Connect the runtime to the messaging system."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def disconnect(self):
|
||||
"""Disconnect the runtime from the messaging system."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def find_by_topic(self, topic: str) -> ActorConnector:
|
||||
def find_by_topic(self, topic: str) -> IActorConnector:
|
||||
"""Find an actor connector by topic."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def find_by_name(self, name: str) -> ActorConnector:
|
||||
def find_by_name(self, name: str) -> IActorConnector:
|
||||
"""Find an actor connector by name."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def find_termination(self) -> ActorConnector:
|
||||
def find_termination(self) -> IActorConnector:
|
||||
"""Find the termination actor connector."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def find_by_name_regex(self, name_regex) -> List[ActorInfo]:
|
||||
def find_by_name_regex(self, name_regex) -> List["ActorInfo"]:
|
||||
"""Find actors by name using a regular expression."""
|
||||
pass
|
||||
|
|
|
@ -1,13 +0,0 @@
|
|||
import zmq
|
||||
|
||||
from autogencap.Actor import Actor
|
||||
from autogencap.constants import Termination_Topic
|
||||
from autogencap.DebugLog import Debug
|
||||
|
||||
|
||||
class AGActor(Actor):
|
||||
def on_start(self, context: zmq.Context):
|
||||
super().on_start(context)
|
||||
str_topic = Termination_Topic
|
||||
Debug(self.actor_name, f"subscribe to: {str_topic}")
|
||||
self._socket.setsockopt_string(zmq.SUBSCRIBE, f"{str_topic}")
|
|
@ -0,0 +1,11 @@
|
|||
from autogencap.actor import Actor
|
||||
from autogencap.constants import Termination_Topic
|
||||
from autogencap.debug_log import Debug
|
||||
|
||||
|
||||
class AGActor(Actor):
|
||||
def on_start(self, runtime):
|
||||
super().on_start(runtime)
|
||||
str_topic = Termination_Topic
|
||||
self._msg_receiver.add_listener(str_topic)
|
||||
Debug(self.actor_name, f"subscribe to: {str_topic}")
|
|
@ -4,7 +4,7 @@ from typing import Callable, Dict, List, Optional, Union
|
|||
from autogen import Agent, ConversableAgent
|
||||
|
||||
from ..actor_runtime import IRuntime
|
||||
from .AutoGenConnector import AutoGenConnector
|
||||
from .autogen_connector import AutoGenConnector
|
||||
|
||||
|
||||
class AG2CAP(ConversableAgent):
|
|
@ -2,8 +2,8 @@ import time
|
|||
|
||||
from autogen import ConversableAgent
|
||||
|
||||
from ..DebugLog import Info, Warn
|
||||
from .CAP2AG import CAP2AG
|
||||
from ..debug_log import Info, Warn
|
||||
from .cap_to_ag import CAP2AG
|
||||
|
||||
|
||||
class Agent:
|
||||
|
|
|
@ -3,7 +3,7 @@ from typing import Dict, Optional, Union
|
|||
|
||||
from autogen import Agent
|
||||
|
||||
from ..ActorConnector import ActorConnector
|
||||
from ..actor_connector import IActorConnector
|
||||
from ..proto.Autogen_pb2 import GenReplyReq, GenReplyResp, PrepChat, ReceiveReq, Terminate
|
||||
|
||||
|
||||
|
@ -13,8 +13,8 @@ class AutoGenConnector:
|
|||
to/from the CAP system.
|
||||
"""
|
||||
|
||||
def __init__(self, cap_sender: ActorConnector):
|
||||
self._can_channel: ActorConnector = cap_sender
|
||||
def __init__(self, cap_sender: IActorConnector):
|
||||
self._can_channel: IActorConnector = cap_sender
|
||||
|
||||
def close(self):
|
||||
"""
|
|
@ -1,8 +1,8 @@
|
|||
from typing import List
|
||||
|
||||
from autogen import Agent, AssistantAgent, GroupChat
|
||||
from autogencap.ag_adapter.AG2CAP import AG2CAP
|
||||
from autogencap.ag_adapter.CAP2AG import CAP2AG
|
||||
from autogencap.ag_adapter.ag_to_cap import AG2CAP
|
||||
from autogencap.ag_adapter.cap_to_ag import CAP2AG
|
||||
|
||||
from ..actor_runtime import IRuntime
|
||||
|
|
@ -1,16 +1,16 @@
|
|||
import time
|
||||
|
||||
from autogen import GroupChatManager
|
||||
from autogencap.ActorConnector import ActorConnector
|
||||
from autogencap.ag_adapter.CAP2AG import CAP2AG
|
||||
from autogencap.ag_adapter.CAPGroupChat import CAPGroupChat
|
||||
from autogencap.actor_connector import IActorConnector
|
||||
from autogencap.ag_adapter.cap_group_chat import CAPGroupChat
|
||||
from autogencap.ag_adapter.cap_to_ag import CAP2AG
|
||||
|
||||
from ..actor_runtime import IRuntime
|
||||
|
||||
|
||||
class CAPGroupChatManager:
|
||||
def __init__(self, groupchat: CAPGroupChat, llm_config: dict, network: IRuntime):
|
||||
self._ensemble: IRuntime = network
|
||||
def __init__(self, groupchat: CAPGroupChat, llm_config: dict, runtime: IRuntime):
|
||||
self._runtime: IRuntime = runtime
|
||||
self._cap_group_chat: CAPGroupChat = groupchat
|
||||
self._ag_group_chat_manager: GroupChatManager = GroupChatManager(
|
||||
groupchat=self._cap_group_chat, llm_config=llm_config
|
||||
|
@ -21,11 +21,11 @@ class CAPGroupChatManager:
|
|||
init_chat=False,
|
||||
self_recursive=True,
|
||||
)
|
||||
self._ensemble.register(self._cap_proxy)
|
||||
self._runtime.register(self._cap_proxy)
|
||||
|
||||
def initiate_chat(self, txt_msg: str) -> None:
|
||||
self._ensemble.connect()
|
||||
user_proxy_conn: ActorConnector = self._ensemble.find_by_name(self._cap_group_chat.chat_initiator)
|
||||
self._runtime.connect()
|
||||
user_proxy_conn: IActorConnector = self._runtime.find_by_name(self._cap_group_chat.chat_initiator)
|
||||
user_proxy_conn.send_txt_msg(txt_msg)
|
||||
self._wait_for_user_exit()
|
||||
|
|
@ -1,4 +1,4 @@
|
|||
from autogencap.ag_adapter.CAP2AG import CAP2AG
|
||||
from autogencap.ag_adapter.cap_to_ag import CAP2AG
|
||||
|
||||
|
||||
class CAPPair:
|
|
@ -5,10 +5,10 @@ from typing import Optional
|
|||
from autogen import ConversableAgent
|
||||
|
||||
from ..actor_runtime import IRuntime
|
||||
from ..DebugLog import Debug, Error, Info, Warn, shorten
|
||||
from ..debug_log import Debug, Error, Info, Warn, shorten
|
||||
from ..proto.Autogen_pb2 import GenReplyReq, GenReplyResp, PrepChat, ReceiveReq, Terminate
|
||||
from .AG2CAP import AG2CAP
|
||||
from .AGActor import AGActor
|
||||
from .ag_actor import AGActor
|
||||
from .ag_to_cap import AG2CAP
|
||||
|
||||
|
||||
class CAP2AG(AGActor):
|
||||
|
@@ -27,15 +27,13 @@ class CAP2AG(AGActor):
         self.STATE = self.States.INIT
         self._can2ag_name: str = self.actor_name + ".can2ag"
         self._self_recursive: bool = self_recursive
-        self._ensemble: IRuntime = None
         self._connectors = {}

-    def on_connect(self, ensemble: IRuntime):
+    def on_connect(self):
         """
         Connect to the AutoGen system.
         """
-        self._ensemble = ensemble
-        self._ag2can_other_agent = AG2CAP(self._ensemble, self._other_agent_name)
+        self._ag2can_other_agent = AG2CAP(self._runtime, self._other_agent_name)
         Debug(self._can2ag_name, "connected to {ensemble}")

     def disconnect_network(self, ensemble: IRuntime):
@@ -117,7 +115,7 @@ class CAP2AG(AGActor):
         if topic in self._connectors:
             return self._connectors[topic]
         else:
-            connector = self._ensemble.find_by_topic(topic)
+            connector = self._runtime.find_by_topic(topic)
             self._connectors[topic] = connector
             return connector
@@ -3,8 +3,8 @@ import time

 import zmq

-from autogencap.Config import router_url, xpub_url, xsub_url
-from autogencap.DebugLog import Debug, Info, Warn
+from autogencap.config import router_url, xpub_url, xsub_url
+from autogencap.debug_log import Debug, Info, Warn


 class Broker:
@@ -3,7 +3,7 @@ import threading

 from termcolor import colored

-import autogencap.Config as Config
+import autogencap.config as config

 # Define log levels as constants
 DEBUG = 0
@@ -22,9 +22,9 @@ class BaseLogger:

     def Log(self, level, context, msg):
         # Check if the current level meets the threshold
-        if level >= Config.LOG_LEVEL:  # Use the LOG_LEVEL from the Config module
+        if level >= config.LOG_LEVEL:  # Use the LOG_LEVEL from the Config module
             # Check if the context is in the list of ignored contexts
-            if context in Config.IGNORED_LOG_CONTEXTS:
+            if context in config.IGNORED_LOG_CONTEXTS:
                 return
             with self._lock:
                 self.WriteLog(level, context, msg)
@@ -34,7 +34,7 @@ class BaseLogger:


 class ConsoleLogger(BaseLogger):
-    def __init__(self, use_color=True):
+    def __init__(self, use_color=config.USE_COLOR_LOGGING):
         super().__init__()
         self._use_color = use_color

@@ -56,6 +56,7 @@ class ConsoleLogger(BaseLogger):
         print(f"{thread_id} {timestamp} {level_name}: [{context}] {msg}")


+# Modify this line to disable color logging by default
 LOGGER = ConsoleLogger()
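The `BaseLogger.Log` hunk above filters first by a module-level log level and then by a list of silenced contexts. A minimal, self-contained sketch of that filtering order (the `MiniLogger` class and the config stand-ins below are illustrative only, not the autogencap classes):

```python
import threading

# Illustrative stand-ins for autogencap.config values.
LOG_LEVEL = 1  # e.g. INFO
IGNORED_LOG_CONTEXTS = ["BROKER"]

DEBUG, INFO, WARN, ERROR = 0, 1, 2, 3


class MiniLogger:
    """Sketch of the BaseLogger.Log flow: level threshold first, then context filter."""

    def __init__(self):
        self._lock = threading.Lock()
        self.lines = []

    def log(self, level, context, msg):
        if level >= LOG_LEVEL:  # drop messages below the configured threshold
            if context in IGNORED_LOG_CONTEXTS:  # drop silenced contexts
                return
            with self._lock:  # serialize writes across threads
                self.lines.append((level, context, msg))


logger = MiniLogger()
logger.log(DEBUG, "App", "too verbose")      # filtered: below LOG_LEVEL
logger.log(INFO, "BROKER", "chatty broker")  # filtered: ignored context
logger.log(INFO, "App", "kept")
print(len(logger.lines))  # 1
```

Note the ordering matters: the context filter only applies to messages that already pass the level threshold, which is exactly how the diffed `Log` method is structured.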
@@ -1,6 +1,6 @@
 from autogencap.actor_runtime import IRuntime
 from autogencap.constants import ZMQ_Runtime
-from autogencap.DebugLog import Error
+from autogencap.debug_log import Error
 from autogencap.zmq_runtime import ZMQRuntime

@@ -1,4 +1,4 @@
-from autogencap.DebugLog import Error
+from autogencap.debug_log import Error
 from autogencap.proto.CAP_pb2 import Error as ErrorMsg
 from autogencap.proto.CAP_pb2 import ErrorCode

@@ -1,6 +1,3 @@
-# Agent_Sender takes a zmq context, Topic and creates a
-# socket that can publish to that topic. It exposes this functionality
-# using send_msg method
 import time
 import uuid
 from typing import Any, Dict
@@ -8,11 +5,12 @@ from typing import Any, Dict
 import zmq
 from zmq.utils.monitor import recv_monitor_message

-from .Config import router_url, xpub_url, xsub_url
-from .DebugLog import Debug, Error, Info
+from .actor_connector import IActorConnector
+from .config import router_url, xpub_url, xsub_url
+from .debug_log import Debug, Error, Info


-class ActorSender:
+class ZMQActorSender:
     def __init__(self, context, topic):
         self._context = context
         self._topic = topic
@@ -73,12 +71,12 @@ class ActorSender:
         self._pub_socket.close()


-class ActorConnector:
+class ZMQActorConnector(IActorConnector):
     def __init__(self, context, topic):
         self._context = context
         self._topic = topic
         self._connect_sub_socket()
-        self._sender = ActorSender(context, topic)
+        self._sender = ZMQActorSender(context, topic)
         time.sleep(0.1)  # Wait for the socket to connect

     def _connect_sub_socket(self):
@@ -4,12 +4,10 @@ import time

 import zmq

-from autogencap.Actor import Actor
-from autogencap.ActorConnector import ActorConnector, ActorSender
-from autogencap.Broker import Broker
-from autogencap.Config import router_url, xpub_url, xsub_url
+from autogencap.actor import Actor
+from autogencap.broker import Broker
 from autogencap.constants import Directory_Svc_Topic
-from autogencap.DebugLog import Debug, Error, Info
+from autogencap.debug_log import Debug, Error, Info
 from autogencap.proto.CAP_pb2 import (
     ActorInfo,
     ActorInfoCollection,
@@ -24,16 +22,18 @@ from autogencap.proto.CAP_pb2 import (
     Error as ErrorMsg,
 )
 from autogencap.utility import report_error_msg
+from autogencap.zmq_actor_connector import ZMQActorConnector, ZMQActorSender

 # TODO (Future DirectorySv PR) use actor description, network_id, other properties to make directory
 # service more generic and powerful


-class DirectoryActor(Actor):
-    def __init__(self, topic: str, name: str):
+class ZMQDirectoryActor(Actor):
+    def __init__(self, topic: str, name: str, context: zmq.Context):
         super().__init__(topic, name)
         self._registered_actors = {}
         self._network_prefix = ""
+        self._context = context

     def on_bin_msg(self, msg: bytes, msg_type: str, topic: str, sender: str) -> bool:
         if msg_type == ActorRegistration.__name__:
@@ -50,7 +50,7 @@ class DirectoryActor(Actor):
         Info("DirectorySvc", f"Ping received: {sender_topic}")
         pong = Pong()
         serialized_msg = pong.SerializeToString()
-        sender_connection = ActorSender(self._context, sender_topic)
+        sender_connection = ZMQActorSender(self._context, sender_topic)
         sender_connection.send_bin_msg(Pong.__name__, serialized_msg)

     def _on_actor_registration_msg(self, topic: str, msg_type: str, msg: bytes, sender_topic: str):
@@ -67,7 +67,7 @@ class DirectoryActor(Actor):
         else:
             self._registered_actors[name] = actor_reg.actor_info

-        sender_connection = ActorSender(self._context, sender_topic)
+        sender_connection = ZMQActorSender(self._context, sender_topic)
         serialized_msg = err.SerializeToString()
         sender_connection.send_bin_msg(ErrorMsg.__name__, serialized_msg)

@@ -96,16 +96,16 @@ class DirectoryActor(Actor):
         else:
             Error("DirectorySvc", f"Actor not found: {actor_lookup.actor_info.name}")

-        sender_connection = ActorSender(self._context, sender_topic)
+        sender_connection = ZMQActorSender(self._context, sender_topic)
         serialized_msg = actor_lookup_resp.SerializeToString()
         sender_connection.send_bin_msg(ActorLookupResponse.__name__, serialized_msg)


-class DirectorySvc:
+class ZMQDirectorySvc:
     def __init__(self, context: zmq.Context = zmq.Context()):
         self._context: zmq.Context = context
-        self._directory_connector: ActorConnector = None
-        self._directory_actor: DirectoryActor = None
+        self._directory_connector: ZMQActorConnector = None
+        self._directory_actor: ZMQDirectoryActor = None

     def _no_other_directory(self) -> bool:
         Debug("DirectorySvc", "Pinging existing DirectorySvc")
@@ -116,12 +116,14 @@ class DirectorySvc:
             return True
         return False

-    def start(self):
+    def start(self, runtime):
         Debug("DirectorySvc", "Starting.")
-        self._directory_connector = ActorConnector(self._context, Directory_Svc_Topic)
+        self._directory_connector = ZMQActorConnector(self._context, Directory_Svc_Topic)
         if self._no_other_directory():
-            self._directory_actor = DirectoryActor(Directory_Svc_Topic, "Directory Service")
-            self._directory_actor.on_start(self._context)
+            self._directory_actor = ZMQDirectoryActor(
+                Directory_Svc_Topic, "Directory Service", self._context
+            )  # Update this line
+            self._directory_actor.on_start(runtime)
             Info("DirectorySvc", "Directory service started.")
         else:
             Info("DirectorySvc", "Another directory service is running. This instance will not start.")
@@ -176,15 +178,8 @@ def main():
     proxy: Broker = Broker(context)
     proxy.start()
     # Start the directory service
-    directory_svc = DirectorySvc(context)
+    directory_svc = ZMQDirectorySvc(context)
     directory_svc.start()
-    # # How do you register an actor?
-    # directory_svc.register_actor_by_name("my_actor")
-    #
-    # # How do you look up an actor?
-    # actor: ActorInfo = directory_svc.lookup_actor_by_name("my_actor")
-    # if actor is not None:
-    #     Info("main", f"Found actor: {actor.name}")

     # DirectorySvc is running in a separate thread. Here we are watching the
     # status and printing status every few seconds. This is
@@ -0,0 +1,51 @@
+# ZMQ implementation of the message receiver
+import threading
+
+import zmq
+
+from autogencap.actor_runtime import IMessageReceiver
+from autogencap.config import xpub_url
+from autogencap.debug_log import Debug, Error
+
+
+class ZMQMsgReceiver(IMessageReceiver):
+    def __init__(self, context: zmq.Context):
+        self._socket = None
+        self._context = context
+        self._start_event = threading.Event()
+        self.run = False
+
+    def init(self, actor_name: str):
+        """Initialize the ZMQ message receiver."""
+        self.actor_name = actor_name
+        self._socket = self._context.socket(zmq.SUB)
+        self._socket.setsockopt(zmq.RCVTIMEO, 500)
+        self._socket.connect(xpub_url)
+        str_topic = f"{self.actor_name}"
+        self.add_listener(str_topic)
+        self._start_event.set()
+
+    def add_listener(self, topic: str):
+        """Add a topic to the message receiver."""
+        Debug(self.actor_name, f"subscribe to: {topic}")
+        self._socket.setsockopt_string(zmq.SUBSCRIBE, f"{topic}")
+
+    def get_message(self):
+        """Retrieve a message from the ZMQ socket."""
+        try:
+            topic, msg_type, sender_topic, msg = self._socket.recv_multipart()
+            topic = topic.decode("utf-8")  # Convert bytes to string
+            msg_type = msg_type.decode("utf-8")  # Convert bytes to string
+            sender_topic = sender_topic.decode("utf-8")  # Convert bytes to string
+        except zmq.Again:
+            return None  # No message received, continue to next iteration
+        except Exception as e:
+            Error(self.actor_name, f"recv thread encountered an error: {e}")
+            return None
+        return topic, msg_type, sender_topic, msg
+
+    def stop(self):
+        """Stop the ZMQ message receiver."""
+        self.run = False
+        self._socket.setsockopt(zmq.LINGER, 0)
+        self._socket.close()
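The new `ZMQMsgReceiver` above follows a simple polling contract: `get_message()` returns `None` on a receive timeout and a `(topic, msg_type, sender_topic, msg)` tuple otherwise, with topic filtering done by the SUB socket subscriptions. A zmq-free, in-memory stub of that same contract (the `QueueMsgReceiver` class and its `publish` helper are hypothetical, for illustration only):

```python
import queue


class QueueMsgReceiver:
    """In-memory stand-in for the ZMQMsgReceiver polling contract."""

    def __init__(self):
        self._queue = queue.Queue()
        self._topics = set()
        self.run = False

    def init(self, actor_name: str):
        self.actor_name = actor_name
        self.add_listener(actor_name)  # an actor listens on its own name, as above
        self.run = True

    def add_listener(self, topic: str):
        self._topics.add(topic)

    def publish(self, topic, msg_type, sender_topic, msg):
        # Test helper: deliver only subscribed topics, like a SUB socket filter.
        if topic in self._topics:
            self._queue.put((topic, msg_type, sender_topic, msg))

    def get_message(self):
        try:
            return self._queue.get(timeout=0.05)  # mimic zmq.RCVTIMEO
        except queue.Empty:
            return None  # timeout: the caller polls again

    def stop(self):
        self.run = False


rx = QueueMsgReceiver()
rx.init("assistant")
rx.publish("assistant", "TextMsg", "user_proxy", b"hello")
rx.publish("other", "TextMsg", "user_proxy", b"dropped")  # not subscribed
m1 = rx.get_message()  # ('assistant', 'TextMsg', 'user_proxy', b'hello')
m2 = rx.get_message()  # None: queue drained, timeout expires
```

Returning `None` on timeout rather than blocking forever is what lets the receiving thread check a stop flag between polls, which is why `ZMQMsgReceiver` sets `zmq.RCVTIMEO`.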
@@ -3,27 +3,31 @@ from typing import List

 import zmq

-from .Actor import Actor
-from .actor_runtime import IRuntime
-from .ActorConnector import ActorConnector
-from .Broker import Broker
+from .actor import Actor
+from .actor_connector import IActorConnector
+from .actor_runtime import IMessageReceiver, IRuntime
+from .broker import Broker
 from .constants import Termination_Topic
-from .DebugLog import Debug, Warn
-from .DirectorySvc import DirectorySvc
+from .debug_log import Debug, Warn
 from .proto.CAP_pb2 import ActorInfo, ActorInfoCollection
+from .zmq_actor_connector import ZMQActorConnector


 class ZMQRuntime(IRuntime):
-    def __init__(self, name: str = "Local Actor Network", start_broker: bool = True):
+    def __init__(self, start_broker: bool = True):
         self.local_actors = {}
-        self.name: str = name
         self._context: zmq.Context = zmq.Context()
         self._start_broker: bool = start_broker
         self._broker: Broker = None
-        self._directory_svc: DirectorySvc = None
+        self._directory_svc = None
+        self._log_name = self.__class__.__name__

     def __str__(self):
-        return f"{self.name}"
+        return f" \
+            {self._log_name}\n \
+            is_broker: {self._broker is not None}\n \
+            is_directory_svc: {self._directory_svc is not None}\n \
+            local_actors: {self.local_actors}\n"

     def _init_runtime(self):
         if self._start_broker and self._broker is None:
@@ -32,23 +36,28 @@ class ZMQRuntime(IRuntime):
             self._start_broker = False  # Don't try to start the broker again
             self._broker = None
         if self._directory_svc is None:
-            self._directory_svc = DirectorySvc(self._context)
-            self._directory_svc.start()
+            from .zmq_directory_svc import ZMQDirectorySvc
+
+            self._directory_svc = ZMQDirectorySvc(self._context)
+            self._directory_svc.start(self)
         time.sleep(0.25)  # Process queued thread events in Broker and Directory

     def register(self, actor: Actor):
         self._init_runtime()
         # Get actor's name and description and add to a dictionary so
         # that we can look up the actor by name
         self._directory_svc.register_actor_by_name(actor.actor_name)
         self.local_actors[actor.actor_name] = actor
-        actor.on_start(self._context)
-        Debug("Local_Actor_Network", f"{actor.actor_name} registered in the network.")
+        actor.on_start(self)  # Pass self (the runtime) to on_start
+        Debug(self._log_name, f"{actor.actor_name} registered in the network.")
+
+    def get_new_msg_receiver(self) -> IMessageReceiver:
+        from .zmq_msg_receiver import ZMQMsgReceiver
+
+        return ZMQMsgReceiver(self._context)

     def connect(self):
         self._init_runtime()
         for actor in self.local_actors.values():
-            actor.on_connect(self)
+            actor.on_connect()

     def disconnect(self):
         for actor in self.local_actors.values():
@@ -58,27 +67,27 @@ class ZMQRuntime(IRuntime):
         if self._broker:
             self._broker.stop()

-    def find_by_topic(self, topic: str) -> ActorConnector:
-        return ActorConnector(self._context, topic)
+    def find_by_topic(self, topic: str) -> IActorConnector:
+        return ZMQActorConnector(self._context, topic)

-    def find_by_name(self, name: str) -> ActorConnector:
+    def find_by_name(self, name: str) -> IActorConnector:
         actor_info: ActorInfo = self._directory_svc.lookup_actor_by_name(name)
         if actor_info is None:
-            Warn("Local_Actor_Network", f"{name}, not found in the network.")
+            Warn(self._log_name, f"{name}, not found in the network.")
             return None
-        Debug("Local_Actor_Network", f"[{name}] found in the network.")
+        Debug(self._log_name, f"[{name}] found in the network.")
         return self.find_by_topic(name)

-    def find_termination(self) -> ActorConnector:
+    def find_termination(self) -> IActorConnector:
         termination_topic: str = Termination_Topic
         return self.find_by_topic(termination_topic)

     def find_by_name_regex(self, name_regex) -> List[ActorInfo]:
         actor_info: ActorInfoCollection = self._directory_svc.lookup_actor_info_by_name(name_regex)
         if actor_info is None:
-            Warn("Local_Actor_Network", f"{name_regex}, not found in the network.")
+            Warn(self._log_name, f"{name_regex}, not found in the network.")
             return None
-        Debug("Local_Actor_Network", f"[{name_regex}] found in the network.")
+        Debug(self._log_name, f"[{name_regex}] found in the network.")
         actor_list = []
         for actor in actor_info.info_coll:
             actor_list.append(actor)
@@ -5,16 +5,16 @@ Demo App
 import argparse

 import _paths
-import autogencap.Config as Config
-import autogencap.DebugLog as DebugLog
-from AGDemo import ag_demo
-from AGGroupChatDemo import ag_groupchat_demo
-from CAPAutGenGroupDemo import cap_ag_group_demo
-from CAPAutoGenPairDemo import cap_ag_pair_demo
-from ComplexActorDemo import complex_actor_demo
+import autogencap.config as config
+import autogencap.debug_log as debug_log
+from ag_demo import ag_demo
+from ag_group_chat_demo import ag_groupchat_demo
+from cap_autogen_group_demo import cap_ag_group_demo
+from cap_autogen_pair_demo import cap_ag_pair_demo
+from complex_actor_demo import complex_actor_demo
 from list_agents import list_agents
-from RemoteAGDemo import remote_ag_demo
-from SimpleActorDemo import simple_actor_demo
+from remote_autogen_demo import remote_ag_demo
+from simple_actor_demo import simple_actor_demo
 from single_threaded import single_threaded_demo

 ####################################################################################################
@@ -28,8 +28,8 @@ def parse_args():
     args = parser.parse_args()
     # Set the log level
     # Print log level string based on names in debug_log.py
-    print(f"Log level: {DebugLog.LEVEL_NAMES[args.log_level]}")
-    Config.LOG_LEVEL = args.log_level
+    print(f"Log level: {debug_log.LEVEL_NAMES[args.log_level]}")
+    config.LOG_LEVEL = args.log_level
     # Config.IGNORED_LOG_CONTEXTS.extend(["BROKER"])
@@ -4,11 +4,10 @@ Each agent represents a different role and knows how to connect to external syst
 to retrieve information.
 """

-from autogencap.Actor import Actor
+from autogencap.actor import Actor
+from autogencap.actor_connector import IActorConnector
 from autogencap.actor_runtime import IRuntime
-from autogencap.ActorConnector import ActorConnector
-from autogencap.DebugLog import Debug, Info, shorten
-from autogencap.runtime_factory import RuntimeFactory
+from autogencap.debug_log import Debug, Info, shorten


 class GreeterAgent(Actor):
@@ -132,23 +131,23 @@ class PersonalAssistant(Actor):
         description="This is the personal assistant, who knows how to connect to the other agents and get information from them.",
     ):
         super().__init__(agent_name, description)
-        self.fidelity: ActorConnector = None
-        self.financial_planner: ActorConnector = None
-        self.quant: ActorConnector = None
-        self.risk_manager: ActorConnector = None
+        self.fidelity: IActorConnector = None
+        self.financial_planner: IActorConnector = None
+        self.quant: IActorConnector = None
+        self.risk_manager: IActorConnector = None

-    def on_connect(self, network: IRuntime):
+    def on_connect(self):
         """
         Connects the personal assistant to the specified local actor network.

         Args:
             network (LocalActorNetwork): The local actor network to connect to.
         """
-        Debug(self.actor_name, f"is connecting to {network}")
-        self.fidelity = network.find_by_name("Fidelity")
-        self.financial_planner = network.find_by_name("Financial Planner")
-        self.quant = network.find_by_name("Quant")
-        self.risk_manager = network.find_by_name("Risk Manager")
+        Debug(self.actor_name, f"is connecting to {self._runtime}")
+        self.fidelity = self._runtime.find_by_name("Fidelity")
+        self.financial_planner = self._runtime.find_by_name("Financial Planner")
+        self.quant = self._runtime.find_by_name("Quant")
+        self.risk_manager = self._runtime.find_by_name("Risk Manager")
         Debug(self.actor_name, "connected")

     def disconnect_network(self, network: IRuntime):
@@ -1,6 +1,6 @@
-from autogencap.ag_adapter.CAPGroupChat import CAPGroupChat
-from autogencap.ag_adapter.CAPGroupChatManager import CAPGroupChatManager
-from autogencap.DebugLog import Info
+from autogencap.ag_adapter.cap_group_chat import CAPGroupChat
+from autogencap.ag_adapter.cap_group_chat_manager import CAPGroupChatManager
+from autogencap.debug_log import Info
 from autogencap.runtime_factory import RuntimeFactory

 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
@@ -35,7 +35,7 @@ def cap_ag_group_demo():
     cap_groupchat = CAPGroupChat(
         agents=[user_proxy, coder, pm], messages=[], max_round=12, ensemble=ensemble, chat_initiator=user_proxy.name
     )
-    manager = CAPGroupChatManager(groupchat=cap_groupchat, llm_config=gpt4_config, network=ensemble)
+    manager = CAPGroupChatManager(groupchat=cap_groupchat, llm_config=gpt4_config, runtime=ensemble)
     manager.initiate_chat("Find a latest paper about gpt-4 on arxiv and find its potential applications in software.")
     ensemble.disconnect()
     Info("App", "App Exit")
@@ -1,15 +1,15 @@
 import time

-import autogencap.DebugLog as DebugLog
-from autogencap.ag_adapter.CAPPair import CAPPair
-from autogencap.DebugLog import ConsoleLogger, Info
+import autogencap.debug_log as debug_log
+from autogencap.ag_adapter.cap_pair import CAPPair
+from autogencap.debug_log import ConsoleLogger, Info
 from autogencap.runtime_factory import RuntimeFactory

 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json


 def cap_ag_pair_demo():
-    DebugLog.LOGGER = ConsoleLogger(use_color=False)
+    debug_log.LOGGER = ConsoleLogger(use_color=False)

     config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
     assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
@@ -1,6 +1,6 @@
 import time

-from AppAgents import FidelityAgent, FinancialPlannerAgent, PersonalAssistant, QuantAgent, RiskManager
+from app_agents import FidelityAgent, FinancialPlannerAgent, PersonalAssistant, QuantAgent, RiskManager
 from autogencap.runtime_factory import RuntimeFactory
 from termcolor import colored

@@ -14,17 +14,17 @@ def complex_actor_demo():
     sends them to the personal assistant agent, and terminates
     when the user enters "quit".
     """
-    ensemble = RuntimeFactory.get_runtime("ZMQ")
+    runtime = RuntimeFactory.get_runtime("ZMQ")
     # Register agents
-    ensemble.register(PersonalAssistant())
-    ensemble.register(FidelityAgent())
-    ensemble.register(FinancialPlannerAgent())
-    ensemble.register(RiskManager())
-    ensemble.register(QuantAgent())
+    runtime.register(PersonalAssistant())
+    runtime.register(FidelityAgent())
+    runtime.register(FinancialPlannerAgent())
+    runtime.register(RiskManager())
+    runtime.register(QuantAgent())
     # Tell agents to connect to other agents
-    ensemble.connect()
+    runtime.connect()
     # Get a channel to the personal assistant agent
-    pa = ensemble.find_by_name(PersonalAssistant.cls_agent_name)
+    pa = runtime.find_by_name(PersonalAssistant.cls_agent_name)
     info_msg = """
     This is an imaginary personal assistant agent scenario.
     Five actors are connected in a self-determined graph. The user
@@ -48,4 +48,4 @@ def complex_actor_demo():
     # Cleanup

     pa.close()
-    ensemble.disconnect()
+    runtime.disconnect()
@@ -1,8 +1,8 @@
 import time
 from typing import List

-from AppAgents import FidelityAgent, GreeterAgent
-from autogencap.DebugLog import Info
+from app_agents import FidelityAgent, GreeterAgent
+from autogencap.debug_log import Info
 from autogencap.proto.CAP_pb2 import ActorInfo
 from autogencap.runtime_factory import RuntimeFactory

|
@ -1,4 +1,4 @@
|
|||
from AppAgents import GreeterAgent
|
||||
from app_agents import GreeterAgent
|
||||
from autogencap.runtime_factory import RuntimeFactory
|
||||
|
||||
|
|
@@ -1,6 +1,6 @@
 import _paths
-from AppAgents import GreeterAgent
-from autogencap.DebugLog import Error
+from app_agents import GreeterAgent
+from autogencap.debug_log import Error
 from autogencap.proto.CAP_pb2 import Ping
 from autogencap.runtime_factory import RuntimeFactory

|
@ -1,58 +0,0 @@
|
|||
import time
|
||||
|
||||
import _paths
|
||||
from autogencap.ag_adapter.CAP2AG import CAP2AG
|
||||
from autogencap.Config import IGNORED_LOG_CONTEXTS
|
||||
from autogencap.DebugLog import Info
|
||||
from autogencap.runtime_factory import RuntimeFactory
|
||||
|
||||
from autogen import UserProxyAgent, config_list_from_json
|
||||
|
||||
|
||||
# Starts the Broker and the Assistant. The UserProxy is started separately.
|
||||
class StandaloneUserProxy:
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
def run(self):
|
||||
print("Running the StandaloneUserProxy")
|
||||
|
||||
user_proxy = UserProxyAgent(
|
||||
"user_proxy",
|
||||
code_execution_config={"work_dir": "coding"},
|
||||
is_termination_msg=lambda x: "TERMINATE" in x.get("content"),
|
||||
)
|
||||
# Composable Agent Network adapter
|
||||
ensemble = RuntimeFactory.get_runtime("ZMQ")
|
||||
user_proxy_adptr = CAP2AG(ag_agent=user_proxy, the_other_name="assistant", init_chat=True, self_recursive=True)
|
||||
ensemble.register(user_proxy_adptr)
|
||||
ensemble.connect()
|
||||
|
||||
# Send a message to the user_proxy
|
||||
user_proxy_conn = ensemble.find_by_name("user_proxy")
|
||||
example = "Plot a chart of MSFT daily closing prices for last 1 Month."
|
||||
print(f"Example: {example}")
|
||||
try:
|
||||
user_input = input("Please enter your command: ")
|
||||
if user_input == "":
|
||||
user_input = example
|
||||
print(f"Sending: {user_input}")
|
||||
user_proxy_conn.send_txt_msg(user_input)
|
||||
|
||||
# Hang around for a while
|
||||
while user_proxy_adptr.run:
|
||||
time.sleep(0.5)
|
||||
except KeyboardInterrupt:
|
||||
print("Interrupted by user, shutting down.")
|
||||
ensemble.disconnect()
|
||||
Info("StandaloneUserProxy", "App Exit")
|
||||
|
||||
|
||||
def main():
|
||||
IGNORED_LOG_CONTEXTS.extend(["BROKER", "DirectorySvc"])
|
||||
assistant = StandaloneUserProxy()
|
||||
assistant.run()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
|
@@ -1,8 +1,8 @@
 import time

 import _paths
-from autogencap.ag_adapter.CAP2AG import CAP2AG
-from autogencap.DebugLog import Info
+from autogencap.ag_adapter.cap_to_ag import CAP2AG
+from autogencap.debug_log import Info
 from autogencap.runtime_factory import RuntimeFactory

 from autogen import AssistantAgent, config_list_from_json
@@ -1,5 +1,5 @@
 import _paths
-from autogencap.Broker import main
+from autogencap.broker import main

 if __name__ == "__main__":
     main()
@@ -1,5 +1,5 @@
 import _paths
-from autogencap.DirectorySvc import main
+from autogencap.zmq_directory_svc import main

 if __name__ == "__main__":
     main()
@@ -2,7 +2,7 @@ import time

 import _paths
 from autogencap.ag_adapter.agent import Agent
-from autogencap.Config import IGNORED_LOG_CONTEXTS
+from autogencap.config import IGNORED_LOG_CONTEXTS
 from autogencap.runtime_factory import RuntimeFactory

 from autogen import UserProxyAgent
|
@ -4,7 +4,7 @@ from typing import Any, Dict
|
|||
|
||||
import _paths
|
||||
import zmq
|
||||
from autogencap.Config import dealer_url, router_url, xpub_url, xsub_url
|
||||
from autogencap.config import dealer_url, router_url, xpub_url, xsub_url
|
||||
from zmq.utils.monitor import recv_monitor_message
|
||||
|
||||
|
||||
|
|
setup.py
@@ -107,6 +107,7 @@ extra_require = {
     "cohere": ["cohere>=5.5.8"],
     "ollama": ["ollama>=0.3.3", "fix_busted_json>=0.0.18"],
     "bedrock": ["boto3>=1.34.149"],
+    "kubernetes": ["kubernetes>=27.2.0"],
 }

 setuptools.setup(
|
@ -0,0 +1,44 @@
|
|||
# Test Environment for autogen.coding.kubernetes.PodCommandLineCodeExecutor
|
||||
|
||||
To test PodCommandLineCodeExecutor, the following environment is required.
|
||||
- kubernetes cluster config file
|
||||
- autogen package
|
||||
|
||||
## kubernetes cluster config file
|
||||
|
||||
kubernetes cluster config file, kubeconfig file's location should be set on environment variable `KUBECONFIG` or
|
||||
It must be located in the .kube/config path of your home directory.
|
||||
|
||||
For Windows, `C:\Users\<<user>>\.kube\config`,
|
||||
For Linux or MacOS, place the kubeconfig file in the `/home/<<user>>/.kube/config` directory.
|
||||
|
||||
## package install
|
||||
|
||||
Clone autogen github repository for package install and testing
|
||||
|
||||
Clone the repository with the command below.
|
||||
|
||||
before contribution
|
||||
```sh
|
||||
git clone -b k8s-code-executor https://github.com/questcollector/autogen.git
|
||||
```
|
||||
|
||||
after contribution
|
||||
```sh
|
||||
git clone https://github.com/microsoft/autogen.git
|
||||
```
|
||||
|
||||
install autogen with kubernetes >= 27.0.2
|
||||
|
||||
```sh
|
||||
cd autogen
|
||||
pip install .[kubernetes] -U
|
||||
```
|
||||
|
||||
## test execution
|
||||
|
||||
Perform the test with the following command
|
||||
|
||||
```sh
|
||||
pytest test/coding/test_kubernetes_commandline_code_executor.py
|
||||
```
|
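The kubeconfig lookup convention above (`KUBECONFIG` first, then the platform-specific home directory) can be sketched as a small standalone helper; `resolve_kubeconfig` is a hypothetical name for illustration, not an autogen API:

```python
import os
import sys
from pathlib import Path


def resolve_kubeconfig(environ=None, platform=None) -> Path:
    """Resolve the kubeconfig path: KUBECONFIG env var first, then ~/.kube/config."""
    # Hypothetical helper; parameters exist only to make the lookup testable.
    environ = os.environ if environ is None else environ
    platform = sys.platform if platform is None else platform
    if environ.get("KUBECONFIG"):
        return Path(environ["KUBECONFIG"])
    # Windows keeps the home directory in %userprofile%, POSIX in $HOME.
    home = environ["userprofile"] if platform == "win32" else environ["HOME"]
    return Path(home) / ".kube" / "config"


print(resolve_kubeconfig({"KUBECONFIG": "/tmp/kc"}, "linux"))    # /tmp/kc
print(resolve_kubeconfig({"HOME": "/home/alice"}, "linux"))      # /home/alice/.kube/config
print(resolve_kubeconfig({"userprofile": r"C:\Users\alice"}, "win32"))
```

The same precedence is what the test module's setup code implements before calling `config.load_config`.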
@@ -0,0 +1,203 @@
+import importlib
+import os
+import sys
+from pathlib import Path
+
+import pytest
+
+from autogen.code_utils import TIMEOUT_MSG
+from autogen.coding.base import CodeBlock, CodeExecutor
+
+try:
+    from autogen.coding.kubernetes import PodCommandLineCodeExecutor
+
+    client = importlib.import_module("kubernetes.client")
+    config = importlib.import_module("kubernetes.config")
+
+    kubeconfig = Path(".kube/config")
+    if os.environ.get("KUBECONFIG", None):
+        kubeconfig = Path(os.environ["KUBECONFIG"])
+    elif sys.platform == "win32":
+        kubeconfig = os.environ["userprofile"] / kubeconfig
+    else:
+        kubeconfig = os.environ["HOME"] / kubeconfig
+
+    if kubeconfig.is_file():
+        config.load_config(config_file=str(kubeconfig))
+        api_client = client.CoreV1Api()
+        api_client.list_namespace()
+        skip_kubernetes_tests = False
+    else:
+        skip_kubernetes_tests = True
+
+    pod_spec = client.V1Pod(
+        metadata=client.V1ObjectMeta(
+            name="abcd", namespace="default", annotations={"sidecar.istio.io/inject": "false"}
+        ),
+        spec=client.V1PodSpec(
+            restart_policy="Never",
+            containers=[
+                client.V1Container(
+                    args=["-c", "while true;do sleep 5; done"],
+                    command=["/bin/sh"],
+                    name="abcd",
+                    image="python:3.11-slim",
+                    env=[
+                        client.V1EnvVar(name="TEST", value="TEST"),
+                        client.V1EnvVar(
+                            name="POD_NAME",
+                            value_from=client.V1EnvVarSource(
+                                field_ref=client.V1ObjectFieldSelector(field_path="metadata.name")
+                            ),
+                        ),
+                    ],
+                )
+            ],
+        ),
+    )
+except Exception:
+    skip_kubernetes_tests = True
+
+
+@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible")
+def test_create_default_pod_executor():
+    with PodCommandLineCodeExecutor(namespace="default", kube_config_file=str(kubeconfig)) as executor:
+        assert executor.timeout == 60
+        assert executor.work_dir == Path("/workspace")
+        assert executor._container_name == "autogen-code-exec"
+        assert executor._pod.metadata.name.startswith("autogen-code-exec-")
+        _test_execute_code(executor)
+
+
+@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible")
+def test_create_node_pod_executor():
+    with PodCommandLineCodeExecutor(
+        image="node:22-alpine",
+        namespace="default",
+        work_dir="./app",
+        timeout=30,
+        kube_config_file=str(kubeconfig),
+        execution_policies={"javascript": True},
+    ) as executor:
+        assert executor.timeout == 30
+        assert executor.work_dir == Path("./app")
+        assert executor._container_name == "autogen-code-exec"
+        assert executor._pod.metadata.name.startswith("autogen-code-exec-")
+        assert executor.execution_policies["javascript"]
+
+        # Test single code block.
+        code_blocks = [CodeBlock(code="console.log('hello world!')", language="javascript")]
+        code_result = executor.execute_code_blocks(code_blocks)
+        assert code_result.exit_code == 0 and "hello world!" in code_result.output and code_result.code_file is not None
+
+        # Test multiple code blocks.
+        code_blocks = [
+            CodeBlock(code="console.log('hello world!')", language="javascript"),
+            CodeBlock(code="let a = 100 + 100; console.log(a)", language="javascript"),
+        ]
+        code_result = executor.execute_code_blocks(code_blocks)
+        assert (
+            code_result.exit_code == 0
+            and "hello world!" in code_result.output
+            and "200" in code_result.output
+            and code_result.code_file is not None
+        )
+
+        # Test running code.
+        file_lines = ["console.log('hello world!')", "let a = 100 + 100", "console.log(a)"]
+        code_blocks = [CodeBlock(code="\n".join(file_lines), language="javascript")]
+        code_result = executor.execute_code_blocks(code_blocks)
+        assert (
+            code_result.exit_code == 0
+            and "hello world!" in code_result.output
+            and "200" in code_result.output
+            and code_result.code_file is not None
+        )
@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible")
|
||||
def test_create_pod_spec_pod_executor():
|
||||
with PodCommandLineCodeExecutor(
|
||||
pod_spec=pod_spec, container_name="abcd", kube_config_file=str(kubeconfig)
|
||||
) as executor:
|
||||
assert executor.timeout == 60
|
||||
assert executor._container_name == "abcd"
|
||||
assert executor._pod.metadata.name == pod_spec.metadata.name
|
||||
assert executor._pod.metadata.namespace == pod_spec.metadata.namespace
|
||||
_test_execute_code(executor)
|
||||
|
||||
# Test bash script.
|
||||
if sys.platform not in ["win32"]:
|
||||
code_blocks = [CodeBlock(code="echo $TEST $POD_NAME", language="bash")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert (
|
||||
code_result.exit_code == 0 and "TEST abcd" in code_result.output and code_result.code_file is not None
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.skipif(skip_kubernetes_tests, reason="kubernetes not accessible")
|
||||
def test_pod_executor_timeout():
|
||||
with PodCommandLineCodeExecutor(namespace="default", timeout=5, kube_config_file=str(kubeconfig)) as executor:
|
||||
assert executor.timeout == 5
|
||||
assert executor.work_dir == Path("/workspace")
|
||||
assert executor._container_name == "autogen-code-exec"
|
||||
assert executor._pod.metadata.name.startswith("autogen-code-exec-")
|
||||
# Test running code.
|
||||
file_lines = ["import time", "time.sleep(10)", "a = 100 + 100", "print(a)"]
|
||||
code_blocks = [CodeBlock(code="\n".join(file_lines), language="python")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert code_result.exit_code == 124 and TIMEOUT_MSG in code_result.output and code_result.code_file is not None
|
||||
|
||||
|
||||
def _test_execute_code(executor: CodeExecutor) -> None:
|
||||
# Test single code block.
|
||||
code_blocks = [CodeBlock(code="import sys; print('hello world!')", language="python")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert code_result.exit_code == 0 and "hello world!" in code_result.output and code_result.code_file is not None
|
||||
|
||||
# Test multiple code blocks.
|
||||
code_blocks = [
|
||||
CodeBlock(code="import sys; print('hello world!')", language="python"),
|
||||
CodeBlock(code="a = 100 + 100; print(a)", language="python"),
|
||||
]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert (
|
||||
code_result.exit_code == 0
|
||||
and "hello world!" in code_result.output
|
||||
and "200" in code_result.output
|
||||
and code_result.code_file is not None
|
||||
)
|
||||
|
||||
# Test bash script.
|
||||
if sys.platform not in ["win32"]:
|
||||
code_blocks = [CodeBlock(code="echo 'hello world!'", language="bash")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert code_result.exit_code == 0 and "hello world!" in code_result.output and code_result.code_file is not None
|
||||
|
||||
# Test running code.
|
||||
file_lines = ["import sys", "print('hello world!')", "a = 100 + 100", "print(a)"]
|
||||
code_blocks = [CodeBlock(code="\n".join(file_lines), language="python")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert (
|
||||
code_result.exit_code == 0
|
||||
and "hello world!" in code_result.output
|
||||
and "200" in code_result.output
|
||||
and code_result.code_file is not None
|
||||
)
|
||||
|
||||
# Test running code has filename.
|
||||
file_lines = ["# filename: test.py", "import sys", "print('hello world!')", "a = 100 + 100", "print(a)"]
|
||||
code_blocks = [CodeBlock(code="\n".join(file_lines), language="python")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
print(code_result.code_file)
|
||||
assert (
|
||||
code_result.exit_code == 0
|
||||
and "hello world!" in code_result.output
|
||||
and "200" in code_result.output
|
||||
and code_result.code_file.find("test.py") > 0
|
||||
)
|
||||
|
||||
# Test error code.
|
||||
code_blocks = [CodeBlock(code="print(sys.platform)", language="python")]
|
||||
code_result = executor.execute_code_blocks(code_blocks)
|
||||
assert code_result.exit_code == 1 and "Traceback" in code_result.output and code_result.code_file is not None
|
|
@@ -406,3 +406,4 @@ You can check out more example notebooks for RAG use cases:

- [Using RetrieveChat with Qdrant for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb)
- [Using RetrieveChat Powered by PGVector for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb)
- [Using RetrieveChat Powered by MongoDB Atlas for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb)
- [Using RetrieveChat Powered by Couchbase for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_couchbase.ipynb)
@@ -0,0 +1,102 @@
# Agent Memory with Zep

<img src="https://raw.githubusercontent.com/getzep/zep/refs/heads/main/assets/zep-logo-icon-gradient-rgb.svg?raw=true" style="width: 20%;" alt="Zep logo"/>

[Zep](https://www.getzep.com/?utm_source=autogen) is a long-term memory service for agentic applications used by both startups and enterprises. With Zep, you can build personalized, accurate, and production-ready agent applications.

Zep's memory continuously learns facts from interactions with users and your changing business data. With [just two API calls](https://help.getzep.com/memory?utm_source=autogen), you can persist chat history to Zep and recall facts relevant to the state of your agent.

Zep is powered by a temporal Knowledge Graph that allows reasoning with facts as they change. A combination of semantic and graph search enables accurate and low-latency fact retrieval.

Sign up for [Zep Cloud](https://www.getzep.com/?utm_source=autogen) or visit the [Zep Community Edition Repo](https://github.com/getzep/zep).

| Feature                                        | Description                                                                           |
| ---------------------------------------------- | ------------------------------------------------------------------------------------- |
| 💬 **Capture Detailed Conversational Context** | Zep's Knowledge Graph-based memory captures episodic, semantic, and temporal contexts |
| 🗄️ **Business Data is Context, too**           | Zep is able to extract facts from JSON and unstructured text as well                  |
| ⚙️ **Tailor For Your Business**                | Fact Ratings and other tools allow you to fine-tune retrieval for your use case       |
| ⚡️ **Instant Memory Retrieval**                | Retrieve relevant facts in under 100ms                                                |
| 🔐 **Compliance & Security**                   | User Privacy Management, SOC 2 Type II certification, and other controls              |
| 🖼️ **Framework Agnostic & Future-Proof**       | Use with AutoGen or any other framework, current or future                            |

<details>
<summary><b><u>Zep Community Edition Walkthrough</u></b></summary>
<a href="https://vimeo.com/1013045013">
<img src="img/ecosystem-zep-ce-walkthrough.png" alt="Zep Fact Ratings" />
</a>
</details>

<details open>
<summary><b><u>User Chat Session and Facts</u></b></summary>
<a href="https://help.getzep.com/chat-history-memory/facts?utm_source=autogen">
<img src="img/ecosystem-zep-session.gif" style="width: 100%;" alt="Chat Session and Facts"/>
</a>
</details>

<details>
<summary><b><u>Implementing Fact Ratings</u></b></summary>
<a href="https://vimeo.com/989192145">
<img src="img/ecosystem-zep-fact-ratings.png" alt="Zep Fact Ratings" />
</a>
</details>

## How Zep works

1. Add chat messages or data artifacts to Zep during each user interaction or agent event.
2. Zep intelligently integrates new information into the user's (or group's) Knowledge Graph, updating existing context as needed.
3. Retrieve relevant facts from Zep for subsequent interactions or events.

Zep's temporal Knowledge Graph maintains contextual information about facts, enabling reasoning about state changes and providing data provenance insights. Each fact includes `valid_at` and `invalid_at` dates, allowing agents to track changes in user preferences, traits, or environment.
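The `valid_at`/`invalid_at` idea can be sketched in plain Python. This is an illustrative model only, not the Zep SDK: the `Fact` class and `currently_valid` helper below are hypothetical names showing how an agent could filter for facts that hold at a given point in time.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class Fact:
    """A temporally scoped fact, mirroring Zep's valid_at/invalid_at dates (illustrative, not the SDK)."""

    content: str
    valid_at: datetime
    invalid_at: Optional[datetime] = None  # None means the fact is still valid


def currently_valid(facts: List[Fact], at: datetime) -> List[Fact]:
    """Return the facts that hold at the given point in time."""
    return [
        f for f in facts
        if f.valid_at <= at and (f.invalid_at is None or at < f.invalid_at)
    ]


facts = [
    Fact("User lives in Paris", datetime(2023, 1, 1), datetime(2024, 6, 1)),
    Fact("User lives in Berlin", datetime(2024, 6, 1)),
]

# Only the Berlin fact is valid in July 2024; the Paris fact was invalidated in June.
print([f.content for f in currently_valid(facts, datetime(2024, 7, 1))])
# -> ['User lives in Berlin']
```

A query at an earlier date would instead return the Paris fact, which is what enables reasoning about state changes and data provenance.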
## Zep is fast

Retrieving facts is simple and very fast. Unlike other memory solutions, Zep does not use agents to ensure facts are relevant. It precomputes facts, entity summaries, and other artifacts asynchronously. For on-premise use, retrieval speed primarily depends on your embedding service's performance.

## Zep supports many types of data

You can add a variety of data artifacts to Zep:

- Chat history messages.
- JSON and unstructured text.

Zep supports chat session, user, and group-level graphs. Group graphs allow for capturing organizational knowledge.

## Getting Started

### Zep Cloud

1. Sign up for [Zep Cloud](https://www.getzep.com?utm_source=autogen) and create a [Project API Key](https://help.getzep.com/projects?utm_source=autogen).

2. Install one of the [Zep Python, TypeScript or Go SDKs](https://help.getzep.com/sdks?utm_source=autogen). Python instructions shown below.

```shell
pip install zep-cloud
```

3. Initialize a client:

```python
import os
from zep_cloud.client import AsyncZep

API_KEY = os.environ.get('ZEP_API_KEY')
client = AsyncZep(
    api_key=API_KEY,
)
```

4. Review the Zep and Autogen [notebook example](/docs/notebooks/agent_memory_using_zep/) for agent-building best practices.

### Zep Community Edition

Follow the [Getting Started guide](https://help.getzep.com/ce/quickstart?utm_source=autogen) or visit the [GitHub Repo](https://github.com/getzep/zep?utm_source=autogen).

## Autogen + Zep examples

- [Autogen Agents with Zep Memory Notebook](/docs/notebooks/agent_memory_using_zep/)

## Extra links

- [📙 Documentation](https://help.getzep.com/?utm_source=autogen)
- [🐦 Twitter / X](https://x.com/zep_ai/)
- [📢 Discord](https://discord.com/invite/W8Kw6bsgXQ)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0829b29a48ca05e2694aca00446ef5768c1b8edec56ce5035527f25f9ee4c81
size 421633

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:179241bd4fa3ed89d721deeb1810a31b9838e7f54582d521bd91f29cbae044f2
size 233905
Binary file not shown. (After: 10 MiB)
@ -0,0 +1,775 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Kubernetes Pod Commandline Code Executor"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The `PodCommandLineCodeExecutor` in the `autogen.coding.kubernetes` module is designed to execute code blocks using a pod in Kubernetes.\n",
|
||||
"It functions similarly to the `DockerCommandLineCodeExecutor`, but specifically creates container within Kubernetes environments.\n",
|
||||
"\n",
|
||||
"There are two condition to use PodCommandLineCodeExecutor.\n",
|
||||
"\n",
|
||||
"- Access to a Kubernetes cluster\n",
|
||||
"- installation `autogen` with the extra requirements `'pyautogen[kubernetes]'`\n",
|
||||
"\n",
|
||||
"For local development and testing, this document uses a Minikube cluster.\n",
|
||||
"\n",
|
||||
"Minikube is a tool that allows you to run a single-node Kubernetes cluster on you local machine. \n",
|
||||
"You can refer to the link below for installation and setup of Minikube.\n",
|
||||
"\n",
|
||||
"🔗 https://minikube.sigs.k8s.io/docs/start/"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Access kubernetes cluster"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"There are four options PodCommandLineCodeExecutor to access kubernetes API server.\n",
|
||||
"\n",
|
||||
"- default kubeconfig file path: `~/.kube/config`\n",
|
||||
"- Provide a custom kubeconfig file path using the `kube_config_file` argument of `PodCommandLineCodeExecutor`.\n",
|
||||
"- Set the kubeconfig file path using the `KUBECONFIG` environment variable.\n",
|
||||
"- Provide token from Kubernetes ServiceAccount with sufficient permissions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Generally, if kubeconfig file is located in `~/.kube/config`, there's no need to provide kubeconfig file path on parameter or environment variables.\n",
|
||||
"\n",
|
||||
"The tutorial of providing ServiceAccount Token is in the last section"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example\n",
|
||||
"\n",
|
||||
"In order to use kubernetes Pod based code executor, you need to install Kubernetes Python SDK.\n",
|
||||
"\n",
|
||||
"You can do this by running the following command:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pip install 'kubernetes>=27'"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Alternatively, you can install it with the extra features for Kubernetes:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pip install 'autogen-agentchat[kubernetes]~=0.2'"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To provide kubeconfig file path with environment variable, It can be added with `os.environ[\"KUBECONFIG\"]`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"# Set the KUBECONFIG environment variable\n",
|
||||
"# if the kubeconfig file is not in the default location(~/.kube/config).\n",
|
||||
"os.environ[\"KUBECONFIG\"] = \"path/to/your/kubeconfig\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from autogen.coding import CodeBlock\n",
|
||||
"from autogen.coding.kubernetes import PodCommandLineCodeExecutor"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"exit_code=0 output='Hello, World!\\n' code_file='/workspace/tmp_code_07da107bb575cc4e02b0e1d6d99cc204.py'\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"with PodCommandLineCodeExecutor(\n",
|
||||
" namespace=\"default\",\n",
|
||||
" # kube_config_file=\"kubeconfig/file/path\" # If you have another kubeconfig file, you can add it on kube_config_file argument\n",
|
||||
") as executor:\n",
|
||||
" print(\n",
|
||||
" executor.execute_code_blocks(\n",
|
||||
" # Example of executing a simple Python code block within a Kubernetes pod.\n",
|
||||
" code_blocks=[\n",
|
||||
" CodeBlock(language=\"python\", code=\"print('Hello, World!')\"),\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Using a context manager(the `with` statement), the pod created by `PodCommandLineCodeExecutor` is automatically deleted after the tasks are completed.\n",
|
||||
"\n",
|
||||
"Although the pod is automatically deleted when using a context manager, you might sometimes need to delete it manually. You can do this using `stop()` method, as shown below:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"executor = PodCommandLineCodeExecutor(namespace=\"default\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"NAME READY STATUS RESTARTS AGE\n",
|
||||
"autogen-code-exec-afd217ac-f77b-4ede-8c53-1297eca5ec64 1/1 Running 0 10m\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%bash\n",
|
||||
"# This command lists all pods in the default namespace. \n",
|
||||
"# The default pod name follows the format autogen-code-exec-{uuid.uuid4()}.\n",
|
||||
"kubectl get pod -n default"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"python:3-slim"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%bash\n",
|
||||
"# This command shows container's image in the pod.\n",
|
||||
"# The default container image is python:3-slim\n",
|
||||
"kubectl get pod autogen-code-exec-afd217ac-f77b-4ede-8c53-1297eca5ec64 -o jsonpath={.spec.containers[0].image}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"executor.stop()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To use a different container image for code executor pod, specify the desired image tag using `image` argument.\n",
|
||||
"\n",
|
||||
"`PodCommandLineCodeExecutor` has a default execution policy that allows Python and shell script code blocks. You can enable other languages with `execution_policies` argument."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"exit_code=0 output='Hello, World!\\n' code_file='app/tmp_code_8c34c8586cb47943728afe1297b7a51c.js'\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"with PodCommandLineCodeExecutor(\n",
|
||||
" image=\"node:22-alpine\", # Specifies the runtime environments using a container image\n",
|
||||
" namespace=\"default\",\n",
|
||||
" work_dir=\"./app\", # Directory within the container where code block files are stored\n",
|
||||
" timeout=10, # Timeout in seconds for pod creation and code block execution (default is 60 seconds)\n",
|
||||
" execution_policies={\n",
|
||||
" \"javascript\": True\n",
|
||||
" }, # Enable execution of Javascript code blocks by updating execution policies\n",
|
||||
") as executor:\n",
|
||||
" print(\n",
|
||||
" executor.execute_code_blocks(\n",
|
||||
" code_blocks=[\n",
|
||||
" CodeBlock(language=\"javascript\", code=\"console.log('Hello, World!')\"),\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If you want to apply custom settings for executor pod, such as annotations, environment variables, commands, volumes etc., \n",
|
||||
"you can provide a custom pod specification using `kubernetes.client.V1Pod` format.\n",
|
||||
"\n",
|
||||
"The `container_name` argument should also be provided because `PodCommandLineCodeExecutor` does not automatically recognize the container where code blocks will be executed."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from kubernetes import client\n",
|
||||
"\n",
|
||||
"pod = client.V1Pod(\n",
|
||||
" metadata=client.V1ObjectMeta(name=\"abcd\", namespace=\"default\", annotations={\"sidecar.istio.io/inject\": \"false\"}),\n",
|
||||
" spec=client.V1PodSpec(\n",
|
||||
" restart_policy=\"Never\",\n",
|
||||
" containers=[\n",
|
||||
" client.V1Container(\n",
|
||||
" args=[\"-c\", \"while true;do sleep 5; done\"],\n",
|
||||
" command=[\"/bin/sh\"],\n",
|
||||
" name=\"abcd\", # container name where code blocks will be executed should be provided using `container_name` argument\n",
|
||||
" image=\"python:3.11-slim\",\n",
|
||||
" env=[\n",
|
||||
" client.V1EnvVar(name=\"TEST\", value=\"TEST\"),\n",
|
||||
" client.V1EnvVar(\n",
|
||||
" name=\"POD_NAME\",\n",
|
||||
" value_from=client.V1EnvVarSource(\n",
|
||||
" field_ref=client.V1ObjectFieldSelector(field_path=\"metadata.name\")\n",
|
||||
" ),\n",
|
||||
" ),\n",
|
||||
" ],\n",
|
||||
" )\n",
|
||||
" ],\n",
|
||||
" ),\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"exit_code=0 output='Hello, World!\\n' code_file='/autogen/tmp_code_07da107bb575cc4e02b0e1d6d99cc204.py'\n",
|
||||
"exit_code=0 output='TEST abcd\\n' code_file='/autogen/tmp_code_202399627ea7fb8d8e816f4910b7f87b.sh'\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"with PodCommandLineCodeExecutor(\n",
|
||||
" pod_spec=pod, # custom executor pod spec\n",
|
||||
" container_name=\"abcd\", # To use custom executor pod spec, container_name where code block will be executed should be specified\n",
|
||||
" work_dir=\"/autogen\",\n",
|
||||
" timeout=60,\n",
|
||||
") as executor:\n",
|
||||
" print(\n",
|
||||
" executor.execute_code_blocks(\n",
|
||||
" code_blocks=[\n",
|
||||
" CodeBlock(language=\"python\", code=\"print('Hello, World!')\"),\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" )\n",
|
||||
" print(\n",
|
||||
" executor.execute_code_blocks(\n",
|
||||
" code_blocks=[\n",
|
||||
" CodeBlock(\n",
|
||||
" code=\"echo $TEST $POD_NAME\", language=\"bash\"\n",
|
||||
" ), # echo environment variables specified in pod_spec\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Integrates with AutoGen Agents"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"`PodCommandLineCodeExecutor` can be integrated with Agents."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from autogen import config_list_from_json\n",
|
||||
"\n",
|
||||
"config_list = config_list_from_json(\n",
|
||||
" env_or_file=\"OAI_CONFIG_LIST\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[33mcode_executor_agent\u001b[0m (to code_writer):\n",
|
||||
"\n",
|
||||
"Write Python code to calculate the moves of disk on tower of hanoi with 3 disks\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mcode_writer\u001b[0m (to code_executor_agent):\n",
|
||||
"\n",
|
||||
"The problem of the Tower of Hanoi with 3 disks involves moving the disks from one peg to another, following these rules:\n",
|
||||
"1. Only one disk can be moved at a time.\n",
|
||||
"2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty peg.\n",
|
||||
"3. No disk may be placed on top of a smaller disk.\n",
|
||||
"\n",
|
||||
"In the solution, I will use a recursive function to calculate the moves and print them out. Here's the Python code to accomplish this:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"def tower_of_hanoi(n, from_rod, to_rod, aux_rod):\n",
|
||||
" if n == 1:\n",
|
||||
" print(f\"Move disk 1 from rod {from_rod} to rod {to_rod}\")\n",
|
||||
" return\n",
|
||||
" tower_of_hanoi(n-1, from_rod, aux_rod, to_rod)\n",
|
||||
" print(f\"Move disk {n} from rod {from_rod} to rod {to_rod}\")\n",
|
||||
" tower_of_hanoi(n-1, aux_rod, to_rod, from_rod)\n",
|
||||
"\n",
|
||||
"n = 3 # Number of disks\n",
|
||||
"tower_of_hanoi(n, 'A', 'C', 'B') # A, B and C are names of the rods\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"This script defines a function `tower_of_hanoi` that will print out each move necessary to solve the Tower of Hanoi problem with the specified number of disks `n`. This specific setup will solve for 3 disks moving from rod 'A' to rod 'C' with the help of rod 'B'.\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[31m\n",
|
||||
">>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...\u001b[0m\n",
|
||||
"\u001b[33mcode_executor_agent\u001b[0m (to code_writer):\n",
|
||||
"\n",
|
||||
"exitcode: 0 (execution succeeded)\n",
|
||||
"Code output: Move disk 1 from rod A to rod C\n",
|
||||
"Move disk 2 from rod A to rod B\n",
|
||||
"Move disk 1 from rod C to rod B\n",
|
||||
"Move disk 3 from rod A to rod C\n",
|
||||
"Move disk 1 from rod B to rod A\n",
|
||||
"Move disk 2 from rod B to rod C\n",
|
||||
"Move disk 1 from rod A to rod C\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mcode_writer\u001b[0m (to code_executor_agent):\n",
|
||||
"\n",
|
||||
"The execution of the provided code successfully calculated and printed the moves for solving the Tower of Hanoi with 3 disks. Here are the steps it performed:\n",
|
||||
"\n",
|
||||
"1. Move disk 1 from rod A to rod C.\n",
|
||||
"2. Move disk 2 from rod A to rod B.\n",
|
||||
"3. Move disk 1 from rod C to rod B.\n",
|
||||
"4. Move disk 3 from rod A to rod C.\n",
|
||||
"5. Move disk 1 from rod B to rod A.\n",
|
||||
"6. Move disk 2 from rod B to rod C.\n",
|
||||
"7. Move disk 1 from rod A to rod C.\n",
|
||||
"\n",
|
||||
"This sequence effectively transfers all disks from rod A to rod C using rod B as an auxiliary, following the rules of the Tower of Hanoi puzzle. If you have any more tasks or need further explanation, feel free to ask!\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mcode_executor_agent\u001b[0m (to code_writer):\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from autogen import ConversableAgent\n",
|
||||
"\n",
|
||||
"# The code writer agent's system message is to instruct the LLM on how to\n",
|
||||
"# use the code executor with python or shell script code\n",
|
||||
"code_writer_system_message = \"\"\"\n",
|
||||
"You have been given coding capability to solve tasks using Python code.\n",
|
||||
"In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.\n",
|
||||
" 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.\n",
" 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.\n",
"Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\n",
"When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.\n",
"If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.\n",
"\"\"\"\n",
"with PodCommandLineCodeExecutor(namespace=\"default\") as executor:\n",
"\n",
"    code_executor_agent = ConversableAgent(\n",
"        name=\"code_executor_agent\",\n",
"        llm_config=False,\n",
"        code_execution_config={\n",
"            \"executor\": executor,\n",
"        },\n",
"        human_input_mode=\"NEVER\",\n",
"    )\n",
"\n",
"    code_writer_agent = ConversableAgent(\n",
"        \"code_writer\",\n",
"        system_message=code_writer_system_message,\n",
"        llm_config={\"config_list\": config_list},\n",
"        code_execution_config=False,  # Turn off code execution for this agent.\n",
"        max_consecutive_auto_reply=2,\n",
"        human_input_mode=\"NEVER\",\n",
"    )\n",
"\n",
"    chat_result = code_executor_agent.initiate_chat(\n",
"        code_writer_agent, message=\"Write Python code to calculate the moves of disk on tower of hanoi with 3 disks\"\n",
"    )"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "93802984-3207-430b-a205-82f0a77df2b2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ChatResult(chat_id=None,\n",
" chat_history=[{'content': 'Write Python code to calculate the moves '\n",
" 'of disk on tower of hanoi with 3 disks',\n",
" 'name': 'code_executor_agent',\n",
" 'role': 'assistant'},\n",
" {'content': 'The problem of the Tower of Hanoi with 3 '\n",
" 'disks involves moving the disks from one '\n",
" 'peg to another, following these rules:\\n'\n",
" '1. Only one disk can be moved at a '\n",
" 'time.\\n'\n",
" '2. Each move consists of taking the '\n",
" 'upper disk from one of the stacks and '\n",
" 'placing it on top of another stack or on '\n",
" 'an empty peg.\\n'\n",
" '3. No disk may be placed on top of a '\n",
" 'smaller disk.\\n'\n",
" '\\n'\n",
" 'In the solution, I will use a recursive '\n",
" 'function to calculate the moves and '\n",
" \"print them out. Here's the Python code \"\n",
" 'to accomplish this:\\n'\n",
" '\\n'\n",
" '```python\\n'\n",
" 'def tower_of_hanoi(n, from_rod, to_rod, '\n",
" 'aux_rod):\\n'\n",
" '    if n == 1:\\n'\n",
" '        print(f\"Move disk 1 from rod '\n",
" '{from_rod} to rod {to_rod}\")\\n'\n",
" '        return\\n'\n",
" '    tower_of_hanoi(n-1, from_rod, '\n",
" 'aux_rod, to_rod)\\n'\n",
" '    print(f\"Move disk {n} from rod '\n",
" '{from_rod} to rod {to_rod}\")\\n'\n",
" '    tower_of_hanoi(n-1, aux_rod, to_rod, '\n",
" 'from_rod)\\n'\n",
" '\\n'\n",
" 'n = 3 # Number of disks\\n'\n",
" \"tower_of_hanoi(n, 'A', 'C', 'B') # A, B \"\n",
" 'and C are names of the rods\\n'\n",
" '```\\n'\n",
" '\\n'\n",
" 'This script defines a function '\n",
" '`tower_of_hanoi` that will print out '\n",
" 'each move necessary to solve the Tower '\n",
" 'of Hanoi problem with the specified '\n",
" 'number of disks `n`. This specific setup '\n",
" 'will solve for 3 disks moving from rod '\n",
" \"'A' to rod 'C' with the help of rod 'B'.\",\n",
" 'name': 'code_writer',\n",
" 'role': 'user'},\n",
" {'content': 'exitcode: 0 (execution succeeded)\\n'\n",
" 'Code output: Move disk 1 from rod A to '\n",
" 'rod C\\n'\n",
" 'Move disk 2 from rod A to rod B\\n'\n",
" 'Move disk 1 from rod C to rod B\\n'\n",
" 'Move disk 3 from rod A to rod C\\n'\n",
" 'Move disk 1 from rod B to rod A\\n'\n",
" 'Move disk 2 from rod B to rod C\\n'\n",
" 'Move disk 1 from rod A to rod C\\n',\n",
" 'name': 'code_executor_agent',\n",
" 'role': 'assistant'},\n",
" {'content': 'The execution of the provided code '\n",
" 'successfully calculated and printed the '\n",
" 'moves for solving the Tower of Hanoi '\n",
" 'with 3 disks. Here are the steps it '\n",
" 'performed:\\n'\n",
" '\\n'\n",
" '1. Move disk 1 from rod A to rod C.\\n'\n",
" '2. Move disk 2 from rod A to rod B.\\n'\n",
" '3. Move disk 1 from rod C to rod B.\\n'\n",
" '4. Move disk 3 from rod A to rod C.\\n'\n",
" '5. Move disk 1 from rod B to rod A.\\n'\n",
" '6. Move disk 2 from rod B to rod C.\\n'\n",
" '7. Move disk 1 from rod A to rod C.\\n'\n",
" '\\n'\n",
" 'This sequence effectively transfers all '\n",
" 'disks from rod A to rod C using rod B as '\n",
" 'an auxiliary, following the rules of the '\n",
" 'Tower of Hanoi puzzle. If you have any '\n",
" 'more tasks or need further explanation, '\n",
" 'feel free to ask!',\n",
" 'name': 'code_writer',\n",
" 'role': 'user'},\n",
" {'content': '',\n",
" 'name': 'code_executor_agent',\n",
" 'role': 'assistant'}],\n",
" summary='',\n",
" cost={'usage_excluding_cached_inference': {'total_cost': 0},\n",
" 'usage_including_cached_inference': {'gpt-4-turbo-2024-04-09': {'completion_tokens': 499,\n",
" 'cost': 0.0269,\n",
" 'prompt_tokens': 1193,\n",
" 'total_tokens': 1692},\n",
" 'total_cost': 0.0269}},\n",
" human_input=[])\n"
]
}
],
"source": [
"import pprint\n",
"\n",
"pprint.pprint(chat_result)"
]
},
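The recursive solver produced in the chat transcript above can be lifted out and run on its own; for `n` disks it prints `2**n - 1` moves (7 moves for `n = 3`). A minimal standalone sketch mirroring the generated code (not one of the notebook's original cells):

```python
def tower_of_hanoi(n, from_rod, to_rod, aux_rod):
    """Print the moves that transfer n disks from from_rod to to_rod."""
    if n == 1:
        print(f"Move disk 1 from rod {from_rod} to rod {to_rod}")
        return
    tower_of_hanoi(n - 1, from_rod, aux_rod, to_rod)  # park n-1 disks on the spare rod
    print(f"Move disk {n} from rod {from_rod} to rod {to_rod}")
    tower_of_hanoi(n - 1, aux_rod, to_rod, from_rod)  # stack them back on top

tower_of_hanoi(3, "A", "C", "B")
```

Running it reproduces the seven moves shown in the execution output above.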
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use ServiceAccount token"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If a `PodCommandLineCodeExecutor` instance runs inside a Kubernetes Pod, it can use a token generated from a ServiceAccount to access the Kubernetes API server.\n",
"\n",
"The `PodCommandLineCodeExecutor` requires the following permissions:\n",
"the verbs `create`, `get`, and `delete` on the `pods` resource, and the verb `get` on the `pods/status` and `pods/exec` resources.\n",
"\n",
"You can create a ServiceAccount, ClusterRole and RoleBinding with `kubectl` as shown below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"serviceaccount/autogen-executor-sa created\n"
]
}
],
"source": [
"%%bash\n",
"# Create ServiceAccount on default namespace\n",
"kubectl create sa autogen-executor-sa"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"clusterrole.rbac.authorization.k8s.io/autogen-executor-role created\n"
]
}
],
"source": [
"%%bash\n",
"# Create ClusterRole that has sufficient permissions\n",
"kubectl create clusterrole autogen-executor-role \\\n",
"  --verb=get,create,delete --resource=pods,pods/status,pods/exec"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rolebinding.rbac.authorization.k8s.io/autogen-executor-rolebinding created\n"
]
}
],
"source": [
"%%bash\n",
"# Create RoleBinding that binds ClusterRole and ServiceAccount\n",
"kubectl create rolebinding autogen-executor-rolebinding \\\n",
"  --clusterrole autogen-executor-role --serviceaccount default:autogen-executor-sa"
]
},
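Before launching the executor Pod, you can sanity-check that the RoleBinding took effect by impersonating the ServiceAccount with `kubectl auth can-i`. This is a sketch assuming the `default` namespace and the resource names created above; each command prints `yes` or `no`:

```shell
# Impersonate the ServiceAccount and probe the granted verbs.
kubectl auth can-i create pods --namespace default \
  --as system:serviceaccount:default:autogen-executor-sa
kubectl auth can-i get pods/exec --namespace default \
  --as system:serviceaccount:default:autogen-executor-sa
```

These checks query the cluster's RBAC configuration, so they require an active cluster connection.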
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A pod using the previously created ServiceAccount can be launched with the following command."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"pod/autogen-executor created\n"
]
}
],
"source": [
"%%bash\n",
"# create pod with serviceaccount\n",
"kubectl run autogen-executor --image python:3 \\\n",
"  --overrides='{\"spec\":{\"serviceAccount\": \"autogen-executor-sa\"}}' \\\n",
"  -- bash -c 'pip install pyautogen[kubernetes] && sleep infinity'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can run `PodCommandLineCodeExecutor` inside a Python interpreter started in the `autogen-executor` Pod.\n",
"\n",
"It creates a new pod for code execution, using the token generated from the `autogen-executor-sa` ServiceAccount."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"kubectl exec autogen-executor -it -- python"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"kube_config_path not provided and default location (~/.kube/config) does not exist. Using inCluster Config. This might not work.\n",
"exit_code=0 output='Hello, World!\\n' code_file='/workspace/tmp_code_07da107bb575cc4e02b0e1d6d99cc204.py'"
]
}
],
"source": [
"from autogen.coding import CodeBlock\n",
"from autogen.coding.kubernetes import PodCommandLineCodeExecutor\n",
"\n",
"# PodCommandLineCodeExecutor uses the token generated from the ServiceAccount via the Kubernetes in-cluster config\n",
"with PodCommandLineCodeExecutor() as executor:\n",
"    print(\n",
"        executor.execute_code_blocks(\n",
"            code_blocks=[\n",
"                CodeBlock(language=\"python\", code=\"print('Hello, World!')\"),\n",
"            ]\n",
"        )\n",
"    )"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@ -127,6 +127,7 @@ For more detailed examples and notebooks showcasing the usage of retrieval augme
- Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb)
- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb)
- Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb)
- Automated Code Generation and Question Answering with [Couchbase](https://www.couchbase.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_couchbase.ipynb)
- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb)
- **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG)