mirror of https://github.com/microsoft/autogen.git

Update Docs; Update examples to allow Azure OpenAI setup (#154)

* Update Docs; Update examples to allow Azure OpenAI setup
* update

parent 4093102e51
commit f42361f57d
@@ -23,7 +2,9 @@ hatch run check

### Virtual environment

-To get a shell with the package available (virtual environment) run:
+To get a shell with the package available (virtual environment),
+in the current directory,
+run:

```sh
hatch shell
File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 100 KiB
@@ -1,6 +1,8 @@
-# Agent Components
+# AI Agents

-AGNext provides a suite of components to help developers build agents.
+AGNext provides a suite of components to help developers build AI agents.
+This section is still under construction.
+The best place to start is the [examples](https://github.com/microsoft/agnext/tree/main/python/examples).

## Type-Routed Agent
@@ -1,57 +1,19 @@
-# Core Concepts
+# Foundation

-## What is a Multi-Agent Application?
-
-A wide variety of software applications can be modeled as a collection of independent
-agents that communicate with each other through messages:
-sensors on a factory floor,
-distributed services powering web applications,
-business workflows involving multiple stakeholders,
-and more recently, generative artificial intelligence (AI) models (e.g., GPT-4) that can write code and interact with
-other software systems.
-We refer to these as multi-agent applications.
-
-In a multi-agent application, agents can live in the same process, on the same machine,
-or on different machines and across organizational boundaries.
-They can be implemented using different AI models, instructions, and programming languages.
-They can collaborate and work toward a common goal.
-
-Each agent is a self-contained unit:
-developers can build, test, and deploy it independently, and reuse it for different scenarios.
-Agents are composable: simple agents can form complex applications.
-
-## AGNext Overview
-
-AGNext is a framework for building multi-agent applications.
-It provides a runtime environment to facilitate communication between agents,
-manage their identities and lifecycles, and enforce boundaries.
-It also provides a set of common patterns and components to help developers build
-AI agents that can work together.
-
-AGNext is designed to be unopinionated and extensible.
-It does not prescribe an abstraction for agents or messages; rather, it provides
-a minimal base layer that can be extended to suit the application's needs.
-Developers can build agents quickly by using the provided components, including
-the type-routed agent, AI model clients, tools for AI models, code execution sandboxes,
-memory stores, and more.
-Developers can also make use of the provided multi-agent patterns to build
-orchestrated workflows, group chat systems, and more.
-
-The API consists of the following modules:
-
-- {py:mod}`agnext.core` - The core interfaces that define agents and the runtime.
-- {py:mod}`agnext.application` - Implementations of the runtime and other modules (e.g., logging) for building applications.
-- {py:mod}`agnext.components` - Independent agent-building components: agents, models, memory, and tools.
+In this section, we focus on the core concepts of AGNext:
+agents, agent runtime, messages, and communication.
+You will not find any AI models or tools here, just the foundational
+building blocks for building multi-agent applications.

## Agent and Agent Runtime

-An agent in AGNext is an entity that can react to, send, and publish
-messages. Messages are the only means through which agents can communicate
+An agent in AGNext can react to, send, and publish messages.
+Messages are the only means through which agents can communicate
with each other.

An agent runtime is the execution environment for agents in AGNext.
-Similar to the runtime environment of a programming language, the
-agent runtime provides the necessary infrastructure to facilitate communication
+Similar to the runtime environment of a programming language,
+an agent runtime provides the necessary infrastructure to facilitate communication
between agents, manage agent lifecycles, enforce security boundaries, and support monitoring and
debugging.
For local development, developers can use {py:class}`~agnext.application.SingleThreadedAgentRuntime`,
@@ -147,8 +109,8 @@ Other runtime implementations will have their own way of running the runtime.

Agents communicate with each other via messages.
Messages are serializable objects; they can be defined using:

-- A subclass of Pydantic's {py:class}`pydantic.BaseModel`, or
-- A dataclass
+- A subclass of Pydantic's {py:class}`pydantic.BaseModel`,
+- A dataclass, or
+- A built-in serializable Python type (e.g., `str`).

For example:
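A minimal sketch of the two class-based options (these particular definitions are illustrative; `TextMessage` and `ImageMessage` are message types reused later in this document):

```python
from dataclasses import dataclass

from pydantic import BaseModel


class TextMessage(BaseModel):
    # A Pydantic-based message: fields are validated on construction.
    content: str
    source: str


@dataclass
class ImageMessage:
    # A dataclass-based message: plain fields, no validation.
    url: str
    source: str
```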
@@ -177,8 +139,10 @@ When an agent receives a message, the runtime will invoke the agent's message handler

({py:meth}`~agnext.core.Agent.on_message`), which should implement the agent's message handling logic.
If this message cannot be handled by the agent, the agent should raise a
{py:class}`~agnext.core.exceptions.CantHandleException`.

For convenience, the {py:class}`~agnext.components.TypeRoutedAgent` base class
-provides a simple API for associating message types with message handlers,
+provides the {py:meth}`~agnext.components.message_handler` decorator
+for associating message types with message handlers,
so developers do not need to implement the {py:meth}`~agnext.core.Agent.on_message` method.

For example, the following type-routed agent responds to `TextMessage` and `ImageMessage`
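A sketch of such an agent (the handler bodies below are illustrative, not the docs' exact example; the decorator routes each incoming message to the handler whose type annotation matches):

```python
from agnext.components import TypeRoutedAgent, message_handler
from agnext.core import CancellationToken


class MyAgent(TypeRoutedAgent):
    @message_handler
    async def on_text_message(self, message: TextMessage, cancellation_token: CancellationToken) -> None:
        # Invoked for messages whose type is TextMessage.
        print(f"Text from {message.source}: {message.content}")

    @message_handler
    async def on_image_message(self, message: ImageMessage, cancellation_token: CancellationToken) -> None:
        # Invoked for messages whose type is ImageMessage.
        print(f"Image from {message.source}: {message.url}")
```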
@@ -221,14 +185,12 @@ There are two types of communication in AGNext:

To send a direct message to another agent, within a message handler use
the {py:meth}`agnext.core.BaseAgent.send_message` method;
from the runtime, use the {py:meth}`agnext.core.AgentRuntime.send_message` method.

-Awaiting this method call will return a `Future[T]` object, where `T` is the type
-of the response of the invoked agent.
-The future object can be awaited to get the actual response.
+Awaiting calls to these methods will return the return value of the
+receiving agent's message handler.

```{note}
-If the invoked agent raises an exception while the sender is awaiting on
-the future, the exception will be propagated back to the sender.
+If the invoked agent raises an exception while the sender is awaiting,
+the exception will be propagated back to the sender.
```

#### Request/Response
@@ -258,10 +220,8 @@ class OuterAgent(TypeRoutedAgent):

    @message_handler
    async def on_str_message(self, message: str, cancellation_token: CancellationToken) -> None:
        print(f"Received message: {message}")
-        # Send a direct message to the inner agent and receive a response future.
-        response_future = await self.send_message(f"Hello from outer, {message}", self.inner_agent_id)
-        # Wait for the response to be ready.
-        response = await response_future
+        # Send a direct message to the inner agent and receive a response.
+        response = await self.send_message(f"Hello from outer, {message}", self.inner_agent_id)
        print(f"Received inner response: {response}")


async def main() -> None:
@@ -289,54 +249,19 @@ To get the response after sending a message, the sender must await on the

-response future. So you can also write `response = await await self.send_message(...)`.
```

-#### Send, No Reply
+#### Command/Notification

-In many scenarios, the sender does not need a response from the receiver.
-In this case, the sender does not need to await on the response future,
+In many scenarios, an agent can command another agent to perform an action,
+or notify another agent of an event. In this case,
+the sender does not need a response from the receiver -- it is a command or notification,
and the receiver does not need to return a value from the message handler.
-In the following example, the `InnerAgent` does not return a value,
-and the `OuterAgent` does not await on the response future:
+For example, the `InnerAgent` can be modified to just print the message it receives:

```python
-from agnext.application import SingleThreadedAgentRuntime
-from agnext.components import TypeRoutedAgent, message_handler
-from agnext.core import CancellationToken, AgentId

class InnerAgent(TypeRoutedAgent):
    @message_handler
    async def on_str_message(self, message: str, cancellation_token: CancellationToken) -> None:
        # Just print the message.
        print(f"Hello from inner, {message}")

-class OuterAgent(TypeRoutedAgent):
-    def __init__(self, description: str, inner_agent_id: AgentId):
-        super().__init__(description)
-        self.inner_agent_id = inner_agent_id
-
-    @message_handler
-    async def on_str_message(self, message: str, cancellation_token: CancellationToken) -> None:
-        print(f"Received message: {message}")
-        # Send a direct message to the inner agent and move on.
-        await self.send_message(f"Hello from outer, {message}", self.inner_agent_id)
-        # No need to wait for the response, just do other things.
-
-async def main() -> None:
-    runtime = SingleThreadedAgentRuntime()
-    inner = runtime.register_and_get("inner_agent", lambda: InnerAgent("InnerAgent"))
-    outer = runtime.register_and_get("outer_agent", lambda: OuterAgent("OuterAgent", inner))
-    await runtime.send_message("Hello, World!", outer)
-    await runtime.process_until_idle()
-
-import asyncio
-asyncio.run(main())
```

-In the above example, the `OuterAgent` sends a direct string message to the `InnerAgent`
-but does not await on the response future. The following output will be produced:
-
-```text
-Received message: Hello, World!
-Hello from inner, Hello from outer, Hello, World!
-```

### Broadcast Communication
@@ -0,0 +1,68 @@
# Overview

This section provides the background and an overview of AGNext.

## Multi-Agent Application

A wide variety of software applications can be modeled as a collection of independent
agents that communicate with each other through messages:
sensors on a factory floor,
distributed services powering web applications,
business workflows involving multiple stakeholders,
and more recently, artificial intelligence (AI) agents powered by language models
(e.g., GPT-4) that can write code and interact with
other software systems.
We refer to these as multi-agent applications.

In a multi-agent application, agents can live in the same process, on the same machine,
or on different machines and across organizational boundaries.
They can be implemented using different AI models, instructions, and programming languages.
They can collaborate and work toward a common goal.

Each agent is a self-contained unit:
developers can build, test, and deploy it independently, and reuse it for different scenarios.
Agents are composable: simple agents can form complex applications.

## AGNext Overview

AGNext is a framework for building multi-agent applications with AI agents.
It provides a runtime environment to facilitate communication between agents,
manage their identities and lifecycles, and enforce boundaries.
It also provides a set of common patterns and components to help developers build
AI agents that can work together.

AGNext is designed to be unopinionated and extensible.
It does not prescribe an abstraction for agents or messages; rather, it provides
a minimal base layer that can be extended to suit the application's needs.
Developers can build agents quickly by using the provided components, including
the type-routed agent, AI model clients, tools for AI models, code execution sandboxes,
memory stores, and more.
Developers can also make use of the provided multi-agent patterns to build
orchestrated workflows, group chat systems, and more.

The API consists of the following layers:

- {py:mod}`agnext.core`
- {py:mod}`agnext.application`
- {py:mod}`agnext.components`

The following diagram shows the relationship between the layers.

![AGNext Layers](agnext-layers.svg)

The {py:mod}`agnext.core` layer defines the
core interfaces and base classes for agents, messages, and the runtime.
This layer is the foundation of the framework and is used by the other layers.

The {py:mod}`agnext.application` layer provides concrete implementations of
the runtime and utilities like logging for building multi-agent applications.

The {py:mod}`agnext.components` layer provides reusable components for building
AI agents, including type-routed agents, AI model clients, tools for AI models,
code execution sandboxes, and memory stores.

The layers are loosely coupled and can be used independently. For example,
you can swap out the runtime in the {py:mod}`agnext.application` layer with your own
runtime implementation.
You can also skip the components in the {py:mod}`agnext.components` layer and
build your own components.
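As a rough illustration of how the three layers meet in user code (the imports below are examples that appear elsewhere in this commit, one per layer):

```python
from agnext.application import SingleThreadedAgentRuntime  # application: a concrete runtime
from agnext.components import TypeRoutedAgent, message_handler  # components: reusable agent building blocks
from agnext.core import AgentId, CancellationToken  # core: base interfaces and types
```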
@@ -2,7 +2,22 @@

The repo is private, so the installation process is a bit more involved than usual.

-## Option 1: Install from GitHub
+## Option 1: Install from a local clone
+
+Make a clone of the repo:
+
+```sh
+git clone https://github.com/microsoft/agnext.git
+```
+
+You can install the package by running:
+
+```sh
+cd agnext/python
+pip install .
+```
+
+## Option 2: Install from GitHub

To install the package from GitHub, you will need to authenticate with GitHub.
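One way to pass the token is to embed it in the Git URL; a sketch (the exact URL fragment, including `#subdirectory=python`, is an assumption based on this repo's layout):

```sh
export GITHUB_TOKEN=$(gh auth token)  # or paste a personal access token (assumed variable name)
pip install "git+https://${GITHUB_TOKEN}@github.com/microsoft/agnext.git#subdirectory=python"
```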
@@ -20,11 +35,3 @@ If you don't have the `gh` CLI installed, you can generate a personal access token

3. Set `Repository Access` to `Only select repositories` and select `Microsoft/agnext`
4. Set `Permissions` to `Repository permissions` and select `Contents: Read`
5. Use the generated token for `GITHUB_TOKEN` in the command above
-
-## Option 2: Install from a local copy
-
-With a copy of the repo cloned locally, you can install the package by running the following command from the root of the repo:
-
-```sh
-pip install .
-```
@@ -1,7 +1,7 @@
AGNext
------

-AGNext is a framework for building multi-agent applications.
+AGNext is a framework for building multi-agent applications with AI agents.

At a high level, it provides a framework for inter-agent communication and a
suite of independent components for building and managing agents.
@@ -9,31 +9,27 @@ You can implement agents in

different programming languages and run them on different machines across organizational boundaries.
You can also implement agents using other agent frameworks and run them in AGNext.

-Please read :doc:`Core Concepts <getting-started/core-concepts>` for
-a detailed overview of AGNext's architecture and design.
-
-AGNext's API consists of the following modules:
-
-- :doc:`core <reference/agnext.core>` - The core interfaces that define agents and the runtime.
-- :doc:`application <reference/agnext.application>` - Implementations of the runtime and other modules (e.g., logging) for building applications.
-- :doc:`components <reference/agnext.components>` - Independent agent-building components: agents, models, memory, and tools.
-
-To get you started quickly, we also offer
-`a suite of examples <https://github.com/microsoft/agnext/tree/main/python/examples>`_
-that demonstrate how to use AGNext.
+To get you started quickly, we offer
+`a suite of examples <https://github.com/microsoft/agnext/tree/main/python/examples>`_.

.. toctree::
   :caption: Getting started
   :hidden:

   getting-started/installation
-   getting-started/core-concepts
+
+.. toctree::
+   :caption: Core Concepts
+   :hidden:
+
+   core-concepts/overview
+   core-concepts/foundation
+   core-concepts/ai-agents

.. toctree::
   :caption: Guides
   :hidden:

   guides/components
   guides/patterns
   guides/logging
   guides/worker-protocol
@@ -8,6 +8,8 @@ This directory contains examples and demos of how to use AGNext.

- `patterns`: Contains examples that illustrate how multi-agent patterns can be implemented in AGNext.
- `demos`: Contains interactive demos that showcase applications that can be built using AGNext.

+See [Running the examples](#running-the-examples) for instructions on how to run the examples.
+
## Core examples

We provide examples to illustrate the core concepts of AGNext: agents, runtime, and message passing.
@@ -28,13 +30,11 @@ We provide examples to illustrate how to use tools in AGNext:

We provide examples to illustrate how multi-agent patterns can be implemented in AGNext:

-- [`coder_executor_pub_sub.py`](patterns/coder_executor_pub_sub.py): An example of how to create a coder-executor reflection pattern using broadcast communication. This example creates a plot of stock prices using the Yahoo Finance API.
-- [`coder_reviewer_direct.py`](patterns/coder_reviewer_direct.py): An example of how to create a coder-reviewer reflection pattern using direct communication.
-- [`coder_reviewer_pub_sub.py`](patterns/coder_reviewer_pub_sub.py): An example of how to create a coder-reviewer reflection pattern using broadcast communication.
-- [`group_chat_pub_sub.py`](patterns/group_chat_pub_sub.py): An example of how to create a round-robin group chat among three agents using broadcast communication.
-- [`mixture_of_agents_direct.py`](patterns/mixture_of_agents_direct.py): An example of how to create a [mixture of agents](https://github.com/togethercomputer/moa) using direct communication.
-- [`mixture_of_agents_pub_sub.py`](patterns/mixture_of_agents_pub_sub.py): An example of how to create a [mixture of agents](https://github.com/togethercomputer/moa) using broadcast communication.
-- [`multi_agent_debate_pub_sub.py`](patterns/multi_agent_debate_pub_sub.py): An example of how to create a [sparse multi-agent debate](https://arxiv.org/abs/2406.11776) pattern using broadcast communication.
+- [`coder_executor.py`](patterns/coder_executor.py): An example of how to create a coder-executor reflection pattern. This example creates a plot of stock prices using the Yahoo Finance API.
+- [`coder_reviewer.py`](patterns/coder_reviewer.py): An example of how to create a coder-reviewer reflection pattern.
+- [`group_chat.py`](patterns/group_chat.py): An example of how to create a round-robin group chat among three agents.
+- [`mixture_of_agents.py`](patterns/mixture_of_agents.py): An example of how to create a [mixture of agents](https://github.com/togethercomputer/moa).
+- [`multi_agent_debate.py`](patterns/multi_agent_debate.py): An example of how to create a [sparse multi-agent debate](https://arxiv.org/abs/2406.11776) pattern.

## Demos
@@ -50,14 +50,40 @@ We provide interactive demos that showcase applications that can be built using

the group chat pattern.
+- [`chess_game.py`](demos/chess_game.py): an example with two chess player agents that execute their own tools, demonstrating tool use and reflection on tool use.

-## Running the examples and demos
+## Running the examples

-First, you need a shell with AGNext and the examples' dependencies installed. To do this, run:
+### Prerequisites
+
+First, you need a shell with AGNext and the examples' dependencies installed.
+To do this, in the example directory, run:

```bash
hatch shell
```

+Then, you need to set the `OPENAI_API_KEY` environment variable to your OpenAI API key.
+
+```bash
+export OPENAI_API_KEY=your_openai_api_key
+```
+
+For the Azure OpenAI API, you need to set the following environment variables:
+
+```bash
+export AZURE_OPENAI_API_KEY=your_azure_openai_api_key
+export AZURE_OPENAI_API_ENDPOINT=your_azure_openai_endpoint
+```
+
+By default, the OpenAI API is used.
+To use the Azure OpenAI API, set the `OPENAI_API_TYPE`
+environment variable to `azure`.
+
+```bash
+export OPENAI_API_TYPE=azure
+```
+
+### Running
+
To run an example, just run the corresponding Python script. For example:

```bash
@@ -1,10 +1,14 @@
-from typing import List, Optional, Union
+import os
+from typing import Any, List, Optional, Union

from agnext.components.models import (
    AssistantMessage,
+    AzureOpenAIChatCompletionClient,
    ChatCompletionClient,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
    OpenAIChatCompletionClient,
    UserMessage,
)
from typing_extensions import Literal
|
|||
raise AssertionError("unreachable")
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def get_chat_completion_client_from_envs(**kwargs: Any) -> ChatCompletionClient:
|
||||
# Check API type.
|
||||
api_type = os.getenv("OPENAI_API_TYPE", "openai")
|
||||
if api_type == "openai":
|
||||
# Check API key.
|
||||
api_key = os.getenv("OPENAI_API_KEY")
|
||||
if api_key is None:
|
||||
raise ValueError("OPENAI_API_KEY is not set")
|
||||
kwargs["api_key"] = api_key
|
||||
return OpenAIChatCompletionClient(**kwargs)
|
||||
elif api_type == "azure":
|
||||
# Check Azure API key.
|
||||
azure_api_key = os.getenv("AZURE_OPENAI_API_KEY")
|
||||
if azure_api_key is None:
|
||||
raise ValueError("AZURE_OPENAI_API_KEY is not set")
|
||||
kwargs["api_key"] = azure_api_key
|
||||
# Check Azure API endpoint.
|
||||
azure_api_endpoint = os.getenv("AZURE_OPENAI_API_ENDPOINT")
|
||||
if azure_api_endpoint is None:
|
||||
raise ValueError("AZURE_OPENAI_API_ENDPOINT is not set")
|
||||
kwargs["azure_endpoint"] = azure_api_endpoint
|
||||
return AzureOpenAIChatCompletionClient(**kwargs) # type: ignore
|
||||
raise ValueError(f"Unknown API type: {api_type}")
|
||||
|
|
|
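For illustration, the example scripts in this commit construct a client like this (assuming the environment variables described in the examples README are set):

```python
# Uses OPENAI_API_KEY by default, or the AZURE_OPENAI_* variables when
# OPENAI_API_TYPE=azure; extra kwargs are forwarded to the client constructor.
client = get_chat_completion_client_from_envs(model="gpt-4-turbo")
```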
@@ -6,18 +6,23 @@ chat completion model, and returns the response to the main function.
"""

import asyncio
+import os
+import sys
from dataclasses import dataclass

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import (
    ChatCompletionClient,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class Message:
@@ -41,7 +46,8 @@ class ChatCompletionAgent(TypeRoutedAgent):

async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    agent = runtime.register_and_get(
-        "chat_agent", lambda: ChatCompletionAgent("Chat agent", OpenAIChatCompletionClient(model="gpt-3.5-turbo"))
+        "chat_agent",
+        lambda: ChatCompletionAgent("Chat agent", get_chat_completion_client_from_envs(model="gpt-3.5-turbo")),
    )

    # Send a message to the agent.
@@ -11,6 +11,8 @@ and publishes the response.
"""

import asyncio
+import os
+import sys
from dataclasses import dataclass
from typing import List
@@ -20,12 +22,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class Message:
@@ -76,7 +81,7 @@ async def main() -> None:
        "Jack",
        lambda: ChatCompletionAgent(
            description="Jack, a comedian",
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            system_messages=[
                SystemMessage("You are a comedian who likes to make jokes. " "When you are done talking, say 'TERMINATE'.")
            ],
@@ -87,7 +92,7 @@ async def main() -> None:
        "Cathy",
        lambda: ChatCompletionAgent(
            description="Cathy, a poet",
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            system_messages=[
                SystemMessage("You are a poet who likes to write poems. " "When you are done talking, say 'TERMINATE'.")
            ],
@@ -8,7 +8,7 @@ import sys

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.memory import ChatMemory
-from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage
+from agnext.components.models import ChatCompletionClient, SystemMessage
from agnext.core import AgentRuntime, CancellationToken

sys.path.append(os.path.abspath(os.path.dirname(__file__)))
@@ -16,7 +16,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.memory import BufferedChatMemory
from common.types import Message, TextMessage
-from common.utils import convert_messages_to_llm_messages
+from common.utils import convert_messages_to_llm_messages, get_chat_completion_client_from_envs
from utils import TextualChatApp, TextualUserAgent, start_runtime
@@ -102,7 +102,7 @@ def chat_room(runtime: AgentRuntime, app: TextualChatApp) -> None:
            description="Alice in the chat room.",
            background_story="Alice is a software engineer who loves to code.",
            memory=BufferedChatMemory(buffer_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    bob = runtime.register_and_get_proxy(
@@ -112,7 +112,7 @@ def chat_room(runtime: AgentRuntime, app: TextualChatApp) -> None:
            description="Bob in the chat room.",
            background_story="Bob is a data scientist who loves to analyze data.",
            memory=BufferedChatMemory(buffer_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    charlie = runtime.register_and_get_proxy(
@@ -122,7 +122,7 @@ def chat_room(runtime: AgentRuntime, app: TextualChatApp) -> None:
            description="Charlie in the chat room.",
            background_story="Charlie is a designer who loves to create art.",
            memory=BufferedChatMemory(buffer_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    app.welcoming_notice = f"""Welcome to the chat room demo with the following participants:
@@ -10,7 +10,7 @@ import sys
from typing import Annotated, Literal

from agnext.application import SingleThreadedAgentRuntime
-from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
+from agnext.components.models import SystemMessage
from agnext.components.tools import FunctionTool
from agnext.core import AgentRuntime
from chess import BLACK, SQUARE_NAMES, WHITE, Board, Move
@@ -22,6 +22,7 @@ from common.agents._chat_completion_agent import ChatCompletionAgent
from common.memory import BufferedChatMemory
from common.patterns._group_chat_manager import GroupChatManager
from common.types import TextMessage
+from common.utils import get_chat_completion_client_from_envs


def validate_turn(board: Board, player: Literal["white", "black"]) -> None:
@@ -168,7 +169,7 @@ def chess_game(runtime: AgentRuntime) -> None:  # type: ignore
                ),
            ],
            memory=BufferedChatMemory(buffer_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
            tools=black_tools,
        ),
    )
@@ -185,7 +186,7 @@ def chess_game(runtime: AgentRuntime) -> None:  # type: ignore
                ),
            ],
            memory=BufferedChatMemory(buffer_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
            tools=white_tools,
        ),
    )
@@ -6,7 +6,7 @@ import sys

import openai
from agnext.application import SingleThreadedAgentRuntime
-from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
+from agnext.components.models import SystemMessage
from agnext.core import AgentRuntime

sys.path.append(os.path.abspath(os.path.dirname(__file__)))
@@ -15,6 +15,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from common.agents import ChatCompletionAgent, ImageGenerationAgent
from common.memory import BufferedChatMemory
from common.patterns._group_chat_manager import GroupChatManager
+from common.utils import get_chat_completion_client_from_envs
from utils import TextualChatApp, TextualUserAgent, start_runtime
@@ -42,7 +43,7 @@ def illustrator_critics(runtime: AgentRuntime, app: TextualChatApp) -> None:
                ),
            ],
            memory=BufferedChatMemory(buffer_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo", max_tokens=500),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo", max_tokens=500),
        ),
    )
    illustrator = runtime.register_and_get_proxy(
@@ -70,7 +71,7 @@ def illustrator_critics(runtime: AgentRuntime, app: TextualChatApp) -> None:
                ),
            ],
            memory=BufferedChatMemory(buffer_size=2),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    runtime.register(
@@ -17,7 +17,7 @@ import aiofiles
import aiohttp
import openai
from agnext.application import SingleThreadedAgentRuntime
-from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
+from agnext.components.models import SystemMessage
from agnext.components.tools import FunctionTool
from agnext.core import AgentRuntime
from markdownify import markdownify  # type: ignore
@@ -30,6 +30,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from common.agents import ChatCompletionAgent
from common.memory import HeadAndTailChatMemory
from common.patterns._group_chat_manager import GroupChatManager
+from common.utils import get_chat_completion_client_from_envs
from utils import TextualChatApp, TextualUserAgent, start_runtime
@@ -127,7 +128,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None:  # type: ignore
                "Be concise and deliver now."
            )
        ],
-        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -167,7 +168,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None:  # type: ignore
                "Be VERY concise."
            )
        ],
-        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -195,7 +196,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None:  # type: ignore
                "Be concise and deliver now."
            )
        ],
-        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -227,7 +228,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None:  # type: ignore
                "Be concise and deliver now."
            )
        ],
-        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -244,7 +245,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None:  # type: ignore
        lambda: GroupChatManager(
            description="A group chat manager.",
            memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
-            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
            participants=[developer, product_manager, ux_designer, illustrator, user_agent],
        ),
    )
@@ -13,7 +13,9 @@ otherwise, it generates a new code block and publishes a code execution task message.
"""

import asyncio
+import os
import re
+import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List
@@ -25,12 +27,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class TaskMessage:
@@ -175,7 +180,7 @@ async def main(task: str, temp_dir: str) -> None:
    runtime = SingleThreadedAgentRuntime()

    # Register the agents.
-    runtime.register("coder", lambda: Coder(model_client=OpenAIChatCompletionClient(model="gpt-4-turbo")))
+    runtime.register("coder", lambda: Coder(model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo")))
    runtime.register("executor", lambda: Executor(executor=LocalCommandLineCodeExecutor(work_dir=temp_dir)))

    # Publish the task message.
@@ -13,7 +13,9 @@ a new code block and publishes a code review task message.

import asyncio
import json
+import os
import re
+import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List, Union
@@ -24,12 +26,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class CodeWritingTask:
@@ -250,14 +255,14 @@ async def main() -> None:
        "ReviewerAgent",
        lambda: ReviewerAgent(
            description="Code Reviewer",
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    runtime.register(
        "CoderAgent",
        lambda: CoderAgent(
            description="Coder",
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    await runtime.publish_message(
@@ -1,249 +0,0 @@
"""
This example shows how to use direct messaging to implement
a simple interaction between a coder and a reviewer agent.
1. The coder agent receives a code writing task message, generates a code block,
and sends a code review task message to the reviewer agent.
2. The reviewer agent receives the code review task message, reviews the code block,
and sends a code review result message to the coder agent.
3. The coder agent receives the code review result message; depending on the result:
if the code is approved, it sends a code writing result message; otherwise, it generates
a new code block and sends a code review task message.
4. The process continues until the coder agent receives an approved code review result message.
5. The main function prints the code writing result.
"""

import asyncio
import json
import re
from dataclasses import dataclass
from typing import List, Union

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import AgentId, CancellationToken


@dataclass
class CodeWritingTask:
    task: str


@dataclass
class CodeWritingResult:
    task: str
    code: str
    review: str


@dataclass
class CodeReviewTask:
    code_writing_task: str
    code_writing_scratchpad: str
    code: str


@dataclass
class CodeReviewResult:
    review: str
    approved: bool


class ReviewerAgent(TypeRoutedAgent):
    """An agent that performs code review tasks."""

    def __init__(
        self,
        description: str,
        model_client: ChatCompletionClient,
    ) -> None:
        super().__init__(description)
        self._system_messages = [
            SystemMessage(
                content="""You are a code reviewer. You focus on correctness, efficiency and safety of the code.
Respond using the following JSON format:
{
    "correctness": "<Your comments>",
    "efficiency": "<Your comments>",
    "safety": "<Your comments>",
    "approval": "<APPROVE or REVISE>",
    "suggested_changes": "<Your comments>"
}
""",
            )
        ]
        self._model_client = model_client

    @message_handler
    async def handle_code_review_task(
        self, message: CodeReviewTask, cancellation_token: CancellationToken
    ) -> CodeReviewResult:
        # Format the prompt for the code review.
        prompt = f"""The problem statement is: {message.code_writing_task}
The code is:
```
{message.code}
```
Please review the code and provide feedback.
"""
        # Generate a response using the chat completion API.
        response = await self._model_client.create(
            self._system_messages + [UserMessage(content=prompt, source=self.metadata["name"])]
        )
        assert isinstance(response.content, str)
        # TODO: use a structured generation library, e.g., guidance, to ensure the response is in the expected format.
        # Parse the response JSON.
        review = json.loads(response.content)
        # Construct the review text.
        review_text = "Code review:\n" + "\n".join([f"{k}: {v}" for k, v in review.items()])
        approved = review["approval"].lower().strip() == "approve"
        # Return the review result.
        return CodeReviewResult(
            review=review_text,
            approved=approved,
        )


class CoderAgent(TypeRoutedAgent):
    """An agent that performs code writing tasks."""

    def __init__(
        self,
        description: str,
        model_client: ChatCompletionClient,
        reviewer: AgentId,
    ) -> None:
        super().__init__(
            description,
        )
        self._system_messages = [
            SystemMessage(
                content="""You are a proficient coder. You write code to solve problems.
Work with the reviewer to improve your code.
Always put all finished code in a single Markdown code block.
For example:
```python
def hello_world():
    print("Hello, World!")
```

Respond using the following format:

Thoughts: <Your comments>
Code: <Your code>
""",
            )
        ]
        self._model_client = model_client
        self._reviewer = reviewer

    @message_handler
    async def handle_code_writing_task(
        self,
        message: CodeWritingTask,
        cancellation_token: CancellationToken,
    ) -> CodeWritingResult:
        # Store the messages in a temporary memory for this request only.
        memory: List[CodeWritingTask | CodeReviewTask | CodeReviewResult] = []
        memory.append(message)
        # Keep generating responses until the code is approved.
        while not (isinstance(memory[-1], CodeReviewResult) and memory[-1].approved):
            # Create a list of LLM messages to send to the model.
            messages: List[LLMMessage] = [*self._system_messages]
            for m in memory:
                if isinstance(m, CodeReviewResult):
                    messages.append(UserMessage(content=m.review, source="Reviewer"))
                elif isinstance(m, CodeReviewTask):
                    messages.append(AssistantMessage(content=m.code_writing_scratchpad, source="Coder"))
                elif isinstance(m, CodeWritingTask):
                    messages.append(UserMessage(content=m.task, source="User"))
                else:
                    raise ValueError(f"Unexpected message type: {m}")
            # Generate a revision using the chat completion API.
            response = await self._model_client.create(messages)
            assert isinstance(response.content, str)
            # Extract the code block from the response.
            code_block = self._extract_code_block(response.content)
            if code_block is None:
                raise ValueError("Code block not found.")
            # Create a code review task.
            code_review_task = CodeReviewTask(
                code_writing_task=message.task,
                code_writing_scratchpad=response.content,
                code=code_block,
            )
            # Store the code review task in the session memory.
            memory.append(code_review_task)
            # Send the code review task to the reviewer.
            result = await self.send_message(code_review_task, self._reviewer)
            # Store the review result in the session memory.
            memory.append(await result)
        # Obtain the request from previous messages.
        review_request = next(m for m in reversed(memory) if isinstance(m, CodeReviewTask))
        assert review_request is not None
        # Publish the code writing result.
        return CodeWritingResult(
            task=message.task,
            code=review_request.code,
            review=memory[-1].review,
        )

    def _extract_code_block(self, markdown_text: str) -> Union[str, None]:
        pattern = r"```(\w+)\n(.*?)\n```"
        # Search for the pattern in the markdown text.
        match = re.search(pattern, markdown_text, re.DOTALL)
        # Extract the language and code block if a match is found.
        if match:
            return match.group(2)
        return None


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    reviewer = runtime.register_and_get(
        "ReviewerAgent",
        lambda: ReviewerAgent(
            description="Code Reviewer",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
        ),
    )
    coder = runtime.register_and_get(
        "CoderAgent",
        lambda: CoderAgent(
            description="Coder",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            reviewer=reviewer,
        ),
    )
    result = await runtime.send_message(
        message=CodeWritingTask(
            task="Write a function to find the directory with the largest number of files using multi-processing."
        ),
        recipient=coder,
    )
    while not result.done():
        await runtime.process_next()
    code_writing_result = result.result()
    assert isinstance(code_writing_result, CodeWritingResult)
    print("Code Writing Result:")
    print("-" * 80)
    print(f"Task:\n{code_writing_result.task}")
    print("-" * 80)
    print(f"Code:\n{code_writing_result.code}")
    print("-" * 80)
    print(f"Review:\n{code_writing_result.review}")


if __name__ == "__main__":
    import logging

    logging.basicConfig(level=logging.WARNING)
    logging.getLogger("agnext").setLevel(logging.DEBUG)
    asyncio.run(main())
@@ -12,6 +12,8 @@ to the last message in the memory and publishes the response.
"""

import asyncio
+import os
+import sys
from dataclasses import dataclass
from typing import List
@@ -21,12 +23,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import AgentId, CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class Message:
@@ -113,7 +118,7 @@ async def main() -> None:
        lambda: GroupChatParticipant(
            description="A data scientist",
            system_messages=[SystemMessage("You are a data scientist.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    agent2 = runtime.register_and_get(
@@ -121,7 +126,7 @@ async def main() -> None:
        lambda: GroupChatParticipant(
            description="An engineer",
            system_messages=[SystemMessage("You are an engineer.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    agent3 = runtime.register_and_get(
@@ -129,7 +134,7 @@ async def main() -> None:
        lambda: GroupChatParticipant(
            description="An artist",
            system_messages=[SystemMessage("You are an artist.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
@@ -8,15 +8,21 @@ The reference agents handle each task independently and return the results to the aggregator agent.
"""

import asyncio
+import os
+import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
-from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage, UserMessage
+from agnext.components.models import ChatCompletionClient, SystemMessage, UserMessage
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class ReferenceAgentTask:
@@ -111,7 +117,7 @@ async def main() -> None:
        lambda: ReferenceAgent(
            description="Reference Agent 1",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.1),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo", temperature=0.1),
        ),
    )
    runtime.register(
@@ -119,7 +125,7 @@ async def main() -> None:
        lambda: ReferenceAgent(
            description="Reference Agent 2",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.5),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo", temperature=0.5),
        ),
    )
    runtime.register(
@@ -127,7 +133,7 @@ async def main() -> None:
        lambda: ReferenceAgent(
            description="Reference Agent 3",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=1.0),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo", temperature=1.0),
        ),
    )
    runtime.register(
@@ -139,7 +145,7 @@ async def main() -> None:
                "...synthesize these responses into a single, high-quality response... Responses from models:"
            )
        ],
-        model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+        model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        num_references=3,
    ),
)
@@ -1,146 +0,0 @@
"""
This example demonstrates the mixture of agents implemented using direct
messaging and async gathering of results.
Mixture of agents: https://github.com/togethercomputer/moa

The example consists of two types of agents: reference agents and an aggregator agent.
The aggregator agent distributes tasks to reference agents and aggregates the results.
The reference agents handle each task independently and return the results to the aggregator agent.
"""

import asyncio
from dataclasses import dataclass
from typing import List

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage, UserMessage
from agnext.core import AgentId, CancellationToken


@dataclass
class ReferenceAgentTask:
    task: str


@dataclass
class ReferenceAgentTaskResult:
    result: str


@dataclass
class AggregatorTask:
    task: str


@dataclass
class AggregatorTaskResult:
    result: str


class ReferenceAgent(TypeRoutedAgent):
    """The reference agent that handles each task independently."""

    def __init__(
        self,
        description: str,
        system_messages: List[SystemMessage],
        model_client: ChatCompletionClient,
    ) -> None:
        super().__init__(description)
        self._system_messages = system_messages
        self._model_client = model_client

    @message_handler
    async def handle_task(
        self, message: ReferenceAgentTask, cancellation_token: CancellationToken
    ) -> ReferenceAgentTaskResult:
        """Handle a task message. This method sends the task to the model and responds with the result."""
        task_message = UserMessage(content=message.task, source=self.metadata["name"])
        response = await self._model_client.create(self._system_messages + [task_message])
        assert isinstance(response.content, str)
        return ReferenceAgentTaskResult(result=response.content)


class AggregatorAgent(TypeRoutedAgent):
    """The aggregator agent that distributes tasks to reference agents and aggregates the results."""

    def __init__(
        self,
        description: str,
        system_messages: List[SystemMessage],
        model_client: ChatCompletionClient,
        references: List[AgentId],
    ) -> None:
        super().__init__(description)
        self._system_messages = system_messages
        self._model_client = model_client
        self._references = references

    @message_handler
    async def handle_task(self, message: AggregatorTask, cancellation_token: CancellationToken) -> AggregatorTaskResult:
        """Handle a task message. This method sends the task to the reference agents
        and aggregates the results."""
        ref_task = ReferenceAgentTask(task=message.task)
        results: List[ReferenceAgentTaskResult] = await asyncio.gather(
            *[await self.send_message(ref_task, ref) for ref in self._references]
        )
        combined_result = "\n\n".join([r.result for r in results])
        response = await self._model_client.create(
            self._system_messages + [UserMessage(content=combined_result, source=self.metadata["name"])]
        )
        assert isinstance(response.content, str)
        return AggregatorTaskResult(result=response.content)


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    ref1 = runtime.register_and_get(
        "ReferenceAgent1",
        lambda: ReferenceAgent(
            description="Reference Agent 1",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.1),
        ),
    )
    ref2 = runtime.register_and_get(
        "ReferenceAgent2",
        lambda: ReferenceAgent(
            description="Reference Agent 2",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.5),
        ),
    )
    ref3 = runtime.register_and_get(
        "ReferenceAgent3",
        lambda: ReferenceAgent(
            description="Reference Agent 3",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=1.0),
        ),
    )
    agg = runtime.register_and_get(
        "AggregatorAgent",
        lambda: AggregatorAgent(
            description="Aggregator Agent",
            system_messages=[
                SystemMessage(
                    "...synthesize these responses into a single, high-quality response... Responses from models:"
                )
            ],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            references=[ref1, ref2, ref3],
        ),
    )
    result = await runtime.send_message(AggregatorTask(task="What are some fun things to do in SF?"), agg)
    while not result.done():
        await runtime.process_next()
    print(result.result())


if __name__ == "__main__":
    import logging

    logging.basicConfig(level=logging.WARNING)
    logging.getLogger("agnext").setLevel(logging.DEBUG)
    asyncio.run(main())
@@ -32,7 +32,9 @@ to sample a random number of neighbors' responses to use.

import asyncio
import logging
+import os
import re
+import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List
@@ -43,12 +45,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@@ -209,7 +214,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver1",
        lambda: MathSolver(
-            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver2", "MathSolver4"],
            max_round=3,
        ),
@@ -217,7 +222,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver2",
        lambda: MathSolver(
-            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver1", "MathSolver3"],
            max_round=3,
        ),
@@ -225,7 +230,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver3",
        lambda: MathSolver(
-            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver2", "MathSolver4"],
            max_round=3,
        ),
@@ -233,7 +238,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver4",
        lambda: MathSolver(
-            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver1", "MathSolver3"],
            max_round=3,
        ),
@@ -12,6 +12,8 @@ list of function calls.

import asyncio
import json
+import os
+import sys
from dataclasses import dataclass
from typing import List
@@ -24,13 +26,16 @@ from agnext.components.models import (
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.components.tools import PythonCodeExecutionTool, Tool
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class ToolExecutionTask:
@@ -130,7 +135,7 @@ async def main() -> None:
        lambda: ToolEnabledAgent(
            description="Tool Use Agent",
            system_messages=[SystemMessage("You are a helpful AI Assistant. Use your tools to solve problems.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            tools=tools,
        ),
    )
@@ -13,6 +13,8 @@ the results back to the tool use agent.

import asyncio
import json
+import os
+import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List
@@ -26,13 +28,16 @@ from agnext.components.models import (
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
-    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.components.tools import PythonCodeExecutionTool, Tool
from agnext.core import CancellationToken

+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
+
+from common.utils import get_chat_completion_client_from_envs
+

@dataclass
class ToolExecutionTask:
@@ -192,7 +197,7 @@ async def main() -> None:
        lambda: ToolUseAgent(
            description="Tool Use Agent",
            system_messages=[SystemMessage("You are a helpful AI Assistant. Use your tools to solve problems.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            tools=tools,
        ),
    )
@@ -10,15 +10,16 @@ import sys

from agnext.application import SingleThreadedAgentRuntime
from agnext.components.models import (
-    OpenAIChatCompletionClient,
    SystemMessage,
)
from agnext.components.tools import FunctionTool
from typing_extensions import Annotated

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__))))
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from coding_one_agent_direct import AIResponse, ToolEnabledAgent, UserRequest
+from common.utils import get_chat_completion_client_from_envs


async def get_stock_price(ticker: str, date: Annotated[str, "The date in YYYY/MM/DD format."]) -> float:
@@ -36,7 +37,7 @@ async def main() -> None:
        lambda: ToolEnabledAgent(
            description="Tool Use Agent",
            system_messages=[SystemMessage("You are a helpful AI Assistant. Use your tools to solve problems.")],
-            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
+            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            tools=[
                # Define a tool that gets the stock price.
                FunctionTool(
@@ -46,6 +46,7 @@ dependencies = [
    "pytest-xdist",
    "pytest-mock",
    "grpcio-tools",
+    "markdownify",
]

[tool.hatch.envs.default.extra-scripts]