mirror of https://github.com/microsoft/autogen.git
Remove chat layer, move it to examples/common (#125)
This commit is contained in:
parent 059550648e
commit 44443c8aad
@ -8,8 +8,6 @@
|
|||
- `core` are the foundational generic interfaces upon which all else is built. This module must not depend on any other module.
|
||||
- `application` are implementations of core components that are used to compose an application.
|
||||
- `components` are the building blocks for creating agents.
|
||||
- `chat` are concrete implementations of agents and multi-agent interactions.
|
||||
It is used for creating demos and experimenting with multi-agent design patterns.
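For orientation, here is a sketch of how application code typically pulls from these layers (the imports below are taken from examples elsewhere in this repository):

```python
from agnext.core import AgentId, CancellationToken               # foundational interfaces
from agnext.application import SingleThreadedAgentRuntime        # a concrete runtime implementation
from agnext.components import TypeRoutedAgent, message_handler   # building blocks for agents
```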
|
||||
|
||||
## Development
|
||||
|
||||
|
|
|
@ -19,7 +19,7 @@ Generally, messages are one of:
|
|||
|
||||
Messages are purely data, and should not contain any logic.
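For example, a message type can be a plain dataclass that carries data only (the fields below are illustrative and mirror the `TextMessage` type used elsewhere in these docs):

```python
from dataclasses import dataclass


@dataclass
class TextMessage:
    content: str  # the text payload
    source: str   # the name of the sender
```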
|
||||
|
||||
### Required Message Types
|
||||
<!-- ### Required Message Types
|
||||
|
||||
At the core framework level there is *no requirement* of which message types are handled by an agent. However, some behavior patterns require agents understand certain message types. For an agent to participate in these patterns, it must understand any such required message types.
|
||||
|
||||
|
@ -34,7 +34,7 @@ Agents should document which message types they can handle. Orchestrating agents
|
|||
|
||||
```{tip}
|
||||
An important part of designing an agent or choosing which agents to use is understanding which message types are required by the agents you are using.
|
||||
```
|
||||
``` -->
|
||||
|
||||
## Communication
|
||||
|
||||
|
|
|
@ -3,13 +3,7 @@
|
|||
Memory is a collection of data corresponding to the conversation history
|
||||
of an agent.
|
||||
Data in memory can be just a simple list of all messages,
|
||||
or one which provides a view of the last N messages
|
||||
({py:class}`agnext.chat.memory.BufferedChatMemory`).
|
||||
|
||||
Built-in memory implementations are:
|
||||
|
||||
- {py:class}`agnext.chat.memory.BufferedChatMemory`
|
||||
- {py:class}`agnext.chat.memory.HeadAndTailChatMemory`
|
||||
or one which provides a view of the last N messages.
|
||||
|
||||
To create a custom memory implementation, you need to subclass the
|
||||
{py:class}`agnext.components.memory.ChatMemory` protocol class and implement
|
||||
|
@ -17,3 +11,53 @@ all its methods.
|
|||
For example, you can use [LLMLingua](https://github.com/microsoft/LLMLingua)
|
||||
to create a custom memory implementation that provides a compressed
|
||||
view of the conversation history.
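A rough sketch of that idea might look like the following. It is not a drop-in implementation: the compression step is abstracted behind a callable (which could be backed by LLMLingua's `PromptCompressor`), and collapsing the history into a single `UserMessage` is an illustrative choice:

```python
from typing import Any, Callable, List, Mapping

from agnext.components.memory import ChatMemory
from agnext.components.models import LLMMessage, UserMessage


class CompressedChatMemory(ChatMemory[LLMMessage]):
    """Keeps the full history, but returns a compressed view of it."""

    def __init__(self, compress: Callable[[str], str]) -> None:
        # `compress` maps a long transcript to a shorter one,
        # e.g. using LLMLingua under the hood.
        self._compress = compress
        self._messages: List[LLMMessage] = []

    async def add_message(self, message: LLMMessage) -> None:
        self._messages.append(message)

    async def get_messages(self) -> List[LLMMessage]:
        if not self._messages:
            return []
        transcript = "\n".join(str(message.content) for message in self._messages)
        # Return the compressed transcript as a single user message.
        return [UserMessage(content=self._compress(transcript), source="memory")]

    async def clear(self) -> None:
        self._messages = []

    def save_state(self) -> Mapping[str, Any]:
        return {"messages": list(self._messages)}

    def load_state(self, state: Mapping[str, Any]) -> None:
        self._messages = list(state["messages"])
```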
|
||||
|
||||
Here is an example of a custom memory implementation that keeps a view of the
|
||||
last N messages:
|
||||
|
||||
```python
|
||||
from typing import Any, List, Mapping
|
||||
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import FunctionExecutionResultMessage, LLMMessage
|
||||
|
||||
|
||||
class BufferedChatMemory(ChatMemory[LLMMessage]):
|
||||
"""A buffered chat memory that keeps a view of the last n messages,
|
||||
where n is the buffer size. The buffer size is set at initialization.
|
||||
|
||||
Args:
|
||||
buffer_size (int): The size of the buffer.
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, buffer_size: int) -> None:
|
||||
self._messages: List[LLMMessage] = []
|
||||
self._buffer_size = buffer_size
|
||||
|
||||
async def add_message(self, message: LLMMessage) -> None:
|
||||
"""Add a message to the memory."""
|
||||
self._messages.append(message)
|
||||
|
||||
async def get_messages(self) -> List[LLMMessage]:
|
||||
"""Get at most `buffer_size` recent messages."""
|
||||
messages = self._messages[-self._buffer_size :]
|
||||
# Handle the case where the first message is a function execution result message.
|
||||
if messages and isinstance(messages[0], FunctionExecutionResultMessage):
|
||||
# Remove the first message from the list.
|
||||
messages = messages[1:]
|
||||
return messages
|
||||
|
||||
async def clear(self) -> None:
|
||||
"""Clear the message memory."""
|
||||
self._messages = []
|
||||
|
||||
def save_state(self) -> Mapping[str, Any]:
|
||||
return {
|
||||
"messages": [message for message in self._messages],
|
||||
"buffer_size": self._buffer_size,
|
||||
}
|
||||
|
||||
def load_state(self, state: Mapping[str, Any]) -> None:
|
||||
self._messages = state["messages"]
|
||||
self._buffer_size = state["buffer_size"]
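
# Usage sketch (illustrative; `UserMessage` comes from `agnext.components.models`):
# with buffer_size=2, `get_messages` returns at most the two most recent messages.
#
#   memory = BufferedChatMemory(buffer_size=2)
#   await memory.add_message(UserMessage(content="hello", source="user"))
#   recent = await memory.get_messages()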
|
||||
|
|
|
@ -10,10 +10,3 @@ like software development.
|
|||
You can implement any multi-agent pattern using AGNext agents, which
|
||||
communicate with each other using messages through the agent runtime
|
||||
(see {doc}`/core-concepts/runtime` and {doc}`/core-concepts/agent`).
|
||||
To make life easier, AGNext provides built-in patterns
|
||||
in {py:mod}`agnext.chat.patterns` that you can use to build
|
||||
multi-agent systems quickly.
|
||||
|
||||
To read about the built-in patterns, see the following guides:
|
||||
|
||||
1. {doc}`/guides/group-chat-coder-reviewer`
|
||||
|
|
|
@ -1,314 +0,0 @@
|
|||
# Group Chat with Coder and Reviewer Agents
|
||||
|
||||
Group Chat from [AutoGen](https://aka.ms/autogen-paper) is a
|
||||
powerful multi-agent pattern supported by AGNext.
|
||||
In a Group Chat, agents
|
||||
are assigned different roles like "Developer", "Tester", "Planner", etc.,
|
||||
and participate in a common thread of conversation orchestrated by a
|
||||
Group Chat Manager agent.
|
||||
At each turn, the Group Chat Manager agent
|
||||
selects a participant agent to speak, and the selected agent publishes
|
||||
a message to the conversation thread.
|
||||
|
||||
In this guide, we use the {py:class}`agnext.chat.patterns.GroupChatManager`
and {py:class}`agnext.chat.agents.ChatCompletionAgent`
to implement the Group Chat pattern with "Coder" and "Reviewer" agents
for a code-writing task.
|
||||
|
||||
First, import the necessary modules and classes:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.agents import ChatCompletionAgent
|
||||
from agnext.chat.memory import BufferedChatMemory
|
||||
from agnext.chat.patterns import GroupChatManager
|
||||
from agnext.chat.types import TextMessage
|
||||
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
|
||||
```
|
||||
|
||||
Next, let's create the runtime:
|
||||
|
||||
```python
|
||||
runtime = SingleThreadedAgentRuntime()
|
||||
```
|
||||
|
||||
Now, let's register the participant agents using the
|
||||
{py:class}`agnext.chat.agents.ChatCompletionAgent` class.
|
||||
The agents do not use any tools here and have a short memory of
|
||||
the last 10 messages:
|
||||
|
||||
```python
|
||||
coder = runtime.register_and_get_proxy(
|
||||
"Coder",
|
||||
lambda: ChatCompletionAgent(
|
||||
description="An agent that writes code",
|
||||
system_messages=[
|
||||
SystemMessage(
|
||||
"You are a coder. You can write code to solve problems.\n"
|
||||
"Work with the reviewer to improve your code."
|
||||
)
|
||||
],
|
||||
model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
|
||||
memory=BufferedChatMemory(buffer_size=10),
|
||||
),
|
||||
)
|
||||
reviewer = runtime.register_and_get_proxy(
|
||||
"Reviewer",
|
||||
lambda: ChatCompletionAgent(
|
||||
description="An agent that reviews code",
|
||||
system_messages=[
|
||||
SystemMessage(
|
||||
"You are a code reviewer. You focus on correctness, efficiency and safety of the code.\n"
|
||||
"Respond using the following format:\n"
|
||||
"Code Review:\n"
|
||||
"Correctness: <Your comments>\n"
|
||||
"Efficiency: <Your comments>\n"
|
||||
"Safety: <Your comments>\n"
|
||||
"Approval: <APPROVE or REVISE>\n"
|
||||
"Suggested Changes: <Your comments>"
|
||||
)
|
||||
],
|
||||
model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
|
||||
memory=BufferedChatMemory(buffer_size=10),
|
||||
),
|
||||
)
|
||||
```
|
||||
|
||||
Let's register the Group Chat Manager agent
|
||||
({py:class}`agnext.chat.patterns.GroupChatManager`)
|
||||
that orchestrates the conversation.
|
||||
|
||||
```python
|
||||
runtime.register(
|
||||
"Manager",
|
||||
lambda: GroupChatManager(
|
||||
description="A manager that orchestrates a back-and-forth converation between a coder and a reviewer.",
|
||||
participants=[coder.id, reviewer.id], # The order of the participants indicates the order of speaking.
|
||||
memory=BufferedChatMemory(buffer_size=10),
|
||||
termination_word="APPROVE",
|
||||
),
|
||||
)
|
||||
```
|
||||
|
||||
In this example, the Group Chat Manager agent selects the coder to speak first,
|
||||
and selects the next speaker in round-robin fashion based on the order of the participants.
|
||||
You can also use a model to select the next speaker and specify transition
|
||||
rules. See {py:class}`agnext.chat.patterns.GroupChatManager` for more details.
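Round-robin selection itself is conceptually simple; the idea is sketched below (an illustration of the concept, not the `GroupChatManager` internals):

```python
participants = ["Coder", "Reviewer"]  # order of registration determines speaking order


def next_speaker(previous_index: int) -> str:
    """Pick the next speaker in round-robin order."""
    return participants[(previous_index + 1) % len(participants)]


print(next_speaker(0))  # Reviewer
```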
|
||||
|
||||
Finally, let's start the conversation by publishing a task message to the runtime:
|
||||
|
||||
```python
|
||||
async def main() -> None:
|
||||
runtime.publish_message(
|
||||
TextMessage(
|
||||
content="Write a Python script that find near-duplicate paragraphs in a directory of many text files. "
|
||||
"Output the file names, line numbers and the similarity score of the near-duplicate paragraphs. ",
|
||||
source="Human",
|
||||
)
|
||||
)
|
||||
while True:
|
||||
await runtime.process_next()
|
||||
await asyncio.sleep(1)
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
The complete code example is available in `examples/coder_reviewer.py`.
|
||||
Below is the output of a run of the group chat example:
|
||||
|
||||
````none
|
||||
--------------------------------------------------------------------------------
|
||||
Human: Write a Python script that finds near-duplicate paragraphs in a directory of many text files. Output the file names, line numbers and the similarity score of the near-duplicate paragraphs.
|
||||
--------------------------------------------------------------------------------
|
||||
Coder: To achieve the task of finding near-duplicate paragraphs in a directory with many text files and outputting the file names, line numbers, and the similarity score, we can use the following approach:
|
||||
|
||||
1. **Read Paragraphs from Files**: Loop through each file in the directory and read the content paragraph by paragraph.
|
||||
2. **Text Preprocessing**: Clean and preprocess the text data (e.g., lowercasing, removing punctuation).
|
||||
3. **Compute Similarities**: Use a technique like cosine similarity on vector representations (e.g., TF-IDF) of the paragraphs to find similarities.
|
||||
4. **Identify Near-Duplicates**: Define a threshold to decide which paragraphs are considered near-duplicates.
|
||||
5. **Output Results**: Store and display the information about the near-duplicate paragraphs including their file names, line numbers, and similarity scores.
|
||||
|
||||
Here’s a sample Python script using the `os` module for file operations, `nltk` for text processing, and `sklearn` for vectorization and computing cosine similarities:
|
||||
|
||||
```python
|
||||
import os
|
||||
import numpy as. np
|
||||
from sklearn.feature_extraction.text import TfidfVectorizer
|
||||
from sklearn.metrics.pairwise import cosine_similarity
|
||||
from nltk.tokenize import sent_tokenize, word_tokenize
|
||||
from nltk.corpus import stopwords
|
||||
import string
|
||||
|
||||
def preprocess_text(text):
|
||||
"""Preprocess text by removing punctuation and stop words, and lowercasing."""
|
||||
text = text.lower()
|
||||
text = ''.join([char for char in text if char not in string.punctuation])
|
||||
words = word_tokenize(text)
|
||||
stop_words = set(stopwords.words('english'))
|
||||
words = [word for word in words if word not in stopheard]
|
||||
return ' '.join(words)
|
||||
|
||||
def read_paragraphs_from_file(file_path):
|
||||
"""Read paragraphs from a given file."""
|
||||
with open(file_path, 'r', encoding='utf-8') as file:
|
||||
content = file.read()
|
||||
paragraphs = [para.strip() for para in content.split('\n') if para.strip()]
|
||||
return paragraphs
|
||||
|
||||
def find_near_duplicates(directory, similarity_threshold=0.8):
|
||||
"""Find near-duplicate paragraphs across files in the given directory."""
|
||||
files_data = []
|
||||
for root, _, files in os.walk(directory):
|
||||
for f in files:
|
||||
file_path = os.path.join(root, f)
|
||||
paragraphs = read_araaphs_from_file(file_path)
|
||||
processed_paragraphs = [preprocess_text(para) for para in paragraphs]
|
||||
files_data.append((f, paragraphs, processed_paragraphs))
|
||||
|
||||
# Vectorizing text data
|
||||
all_processed_paras = [data for _, _, processed_paras in files_data for data in processed_paras]
|
||||
vectorizer = TfidfVectorizer()
|
||||
tfidf_matrix = vectorizer.fit_transform(all_processed_paras)
|
||||
|
||||
# Compute cosine similarity
|
||||
cos_similarity_matrix = cosine_similarity(tfidf_matrix)
|
||||
|
||||
# Checking for near-duplicates based on threshold
|
||||
for i, (file_i, paragraphs_i, _) in enumerate(fileElot_data):
|
||||
for j in range(i + 1, len(files_data)):
|
||||
file_j, paragraphs_j, _ = dies_data[j]
|
||||
for index_i, para_i in enumerate(paragrophs_i):
|
||||
for index_j, para_j in enumerate(paragraphs_j):
|
||||
sim_score = cos_similarity_matrix[i * len(paragraphs_i) +foendez_i][j * xen(diruhspchuc _ dia] hmide wyst é)
|
||||
if sim_ctore >= pepparturr_thresheid:
|
||||
overall_index_i = sum(len(dp_cata[k-apached]) for k intren(i, tlen angmeapl sagrod_u sdisterf chaperrat:
|
||||
print(f"{file_i} (para {index_i+1}), {file_j} (lgrafonen{iad ef + , SIM enchantisrowREeteraf): {sidotta{(": . bridgescodensorphiae:
|
||||
)
|
||||
if __name__ == '__main__':
|
||||
DIRECTORY_PATH = 'path/to/directory'
|
||||
find_nearduplmany czup costsD etgt*tyn dup examineyemitour EgoreOtyp als
|
||||
```
|
||||
|
||||
This script accomplishes the task as outlined. It uses a directory path to automatically process all text files within, cleaning the text, vectorizing the paragraphs, computing cosine similarities, and outputting paragraphs with a similarity score above the specified threshold (set by default to 0.8, but can be adjusted). Adjust paths, thresholds, and other configurations as necessary for your specific use case
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
Reviewer: There are several syntax and logic issues within the provided code that need to be addressed before approving it:
|
||||
|
||||
1. **Syntax Mistakes:**
|
||||
- In the import statement, `numpy as. np` should be corrected to `import numpy as np`.
|
||||
- Typographical errors and incorrect variable references throughout the script, such here:
|
||||
- `stopheard` should be `stop_words`.
|
||||
- `read_araaphs_from_file` should be `read_paragraphs_from_file`.
|
||||
- `preprocess_text` includes `stopheard`, which should be corrected.
|
||||
- `fileElot_data` typing error; it should be `files_data`.
|
||||
- `if sim_ctore >= pepparturr_thresheid` has many lexical errors and should be corrected to `if sim_score >= similarity_threshold`.
|
||||
- `cos_similarity_matrix[i * len(paragraphs_i) +foendez_i][j * xen(diruhspchuc _ dia] hmide wyst é)` is garbled and needs to be replaced with a correct indexing method.
|
||||
- Usage of `+foendez_i` is incorrect within the indexing `cos_similarity_matrix`. This needs correction based on proper iterative indexing.
|
||||
- Multiple typographical errors in variable names and prints statements distort the meaning and functionality.
|
||||
|
||||
2. **Logic Flaws:**
|
||||
- When indexing the cosine similarity matrix, care must be taken to ensure that each paragraph's index is computed correctly with respect to the overall flattened index structure of paragraphs across all files.
|
||||
- Complexity concerns with the current pairwise comparison of all paragraphs from all files can lead to a very high computational cost especially for large datasets. Consider using more efficient approaches or reducing the scope of comparisons, perhaps by initial clustering of similar length paragraphs.
|
||||
- The error handling mechanism (e.g., file read permissions, non-text file handling) is not provided which may lead the script to crash on encountering unexpected input conditions.
|
||||
|
||||
3. **Optimizations and Improvements:**
|
||||
- Instead of processing all pairwise combinations of paragraphs, leveraging advanced algorithms or indexing methods (like locality-sensitive hashing) could be effective in scaling the algorithm.
|
||||
- Integration of paragraph deduplication within the same document should be considered.
|
||||
- Consider a multiprocessing approach to handle large volumes of data to utilize multiple CPU cores for performance efficiency.
|
||||
|
||||
4. **Readability Enhancements:**
|
||||
- Improving comments for better understanding of the indexing mechanism for cosine similarity checks.
|
||||
- Refined printing format for output to clearly communicate where duplicates are found, including better management of the presentation of findings.
|
||||
|
||||
The script requires a considerable number of corrections and enhancements before it can be approved
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
Coder: Thank you for the thorough review. I will address the issues and make the necessary revisions to the code.
|
||||
|
||||
Here's the revised Python script with syntax corrections, logic improvements, error handling, and readability enhancements:
|
||||
|
||||
```python
|
||||
import os
|
||||
import numpy as np
|
||||
from sklearn.feature_extraction.text import TfidfVectorizer
|
||||
from sklearn.metrics.pairwise import cosine_similarity
|
||||
from nltk.tokenize import word_tokenize
|
||||
from nltk.corpus import stopwords
|
||||
import string
|
||||
|
||||
def preprocess_text(text):
|
||||
"""Preprocess text by removing punctuation and stopwords, and lowercasing."""
|
||||
text = text.lower()
|
||||
text = ''.join(char for char in text if char not in string.punctuation)
|
||||
words = word_tokenize(text)
|
||||
stop_words = set(stopwords.words('english'))
|
||||
words = [word for word in words if word not in stop_words]
|
||||
return ' '.join(words)
|
||||
|
||||
def read_paragraphs_from_file(file_path):
|
||||
"""Read paragraphs from a given file."""
|
||||
try:
|
||||
with open(file_path, 'r', encoding='utf-8') as file:
|
||||
content = file.read()
|
||||
paragraphs = [para.strip() for para in content.split('\n') if para.strip()]
|
||||
return paragraphs
|
||||
except IOError:
|
||||
print(f"Error reading file: {file_path}")
|
||||
return []
|
||||
|
||||
def find_near_duplicates(directory, similarity_threshold=0.8):
|
||||
"""Find near-duplicate paragraphs across files in the given directory."""
|
||||
files_data = []
|
||||
for root, _, files in os.walk(directory):
|
||||
for filename in files:
|
||||
file_path = os.path.join(root, filename)
|
||||
paragraphs = read_paragraphs_from_file(file_path)
|
||||
processed_paragraphs = [preprocess_text(para) for para in paragraphs]
|
||||
files_data.append((filename, paragraphs, processed_paragraphs))
|
||||
|
||||
# Vectorizing text data
|
||||
all_processed_paras = [para for _, _, processed_paras in files_data for para in processed_paras]
|
||||
if not all_processed_paras:
|
||||
return
|
||||
vectorizer = TfidfVectorizer()
|
||||
tfidf_matrix = vectorizer.fit_transform(all_processed_paras)
|
||||
|
||||
# Compute cosine similarity
|
||||
cos_similarity_matrix = cosine_similarity(tfidf_matrix)
|
||||
|
||||
# Checking for near-duplicates based on threshold
|
||||
n = sum(len(paras) for _, paras, _ in files_data) # Total number of paragraphs
|
||||
idx = 0
|
||||
for i, (file_i, paragraphs_i, _) in enumerate(files_data):
|
||||
for index_i, para_i in enumerate(paragraphs_i):
|
||||
global_idx_i = idx + index_i
|
||||
for j in range(i, len(files_data)):
|
||||
offset = 0 if i == j else len(files_data[i][1]) # Avoid duplicate comparisons within the same file
|
||||
start_idx = idx + offset if j == i else sum(len(files_data[k][1]) for k in range(j))
|
||||
file_j, paragraphs_j, _ = files_data[j]
|
||||
for index_j, para_j in enumerate(paragraphs_j):
|
||||
global_idx_j = start_idx + index_j
|
||||
if global_idx_i < global_idx_j:
|
||||
sim_score = cos_similarity_matrix[global_idx_i, global_idx_j]
|
||||
if sim_score >= similarity_threshold:
|
||||
print(f"{file_i} (para {index_i+1}) and {file_j} (para {index_j+1}), Similarity Score: {sim_score:.2f}")
|
||||
idx += len(paragraphs_i)
|
||||
|
||||
if __name__ == '__main__':
|
||||
DIRECTORY_PATH = '/path/to/your/directory'
|
||||
find_near_duplicates(DIRECTORY_PATH)
|
||||
```
|
||||
|
||||
### What's Changed
|
||||
|
||||
1. **Syntax and Typo Corrections**: Corrected lexically incorrect variables and function calls.
|
||||
2. **Error Handling**: Added error handling for file read operations.
|
||||
3. **Efficiency and Logic Correction**: Adjusted how the indices for similarity matrix accesses are calculated to avoid redundant comparisons and incorrect ranges.
|
||||
4. **Readability**: Improved comments and output formatting for better understanding and readability of the output results.
|
||||
|
||||
This script should now work more reliably and efficiently, with clearer output and handling of typical file access issues. Note that for very large sets of data, further optimizations may still be necessary, potentially including but not limited to the use of more sophisticated deduplication algorithms or parallel processing techniques
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
Reviewer: APPROVE
|
||||
````
|
|
@ -7,9 +7,9 @@ The decorator {py:func}`agnext.components.message_handler` should be added to fu
|
|||
- The function must be an `async` function.
|
||||
- The function must be decorated with the `message_handler` decorator.
|
||||
- The function must have exactly 3 arguments.
|
||||
- `self`
|
||||
- `message`: The message to be handled; it must be type hinted with the message type it is intended to handle.
|
||||
- `cancellation_token`: A {py:class}`agnext.core.CancellationToken` object
|
||||
- The function must be type hinted with what message types it can return.
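Putting these requirements together, a handler has roughly the following shape (a sketch only: `TextMessage` is an illustrative message type, and the enclosing `TypeRoutedAgent` subclass is omitted):

```python
@message_handler
async def on_text_message(
    self,
    message: TextMessage,                   # the type hint determines which messages are routed here
    cancellation_token: CancellationToken,  # lets the runtime cancel long-running work
) -> None:                                  # annotate any message types the handler may return
    ...
```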
|
||||
|
||||
```{tip}
|
||||
|
@ -23,10 +23,25 @@ The following is an example of a simple agent that broadcasts the fact it receiv
|
|||
One important thing to point out is that when an agent is constructed it must be passed a runtime object. This allows the agent to communicate with other agents via the runtime.
|
||||
|
||||
```python
|
||||
from agnext.chat.types import MultiModalMessage, Reset, TextMessage
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from dataclasses import dataclass
|
||||
from typing import List, Union
|
||||
from agnext.components import TypeRoutedAgent, message_handler, Image
|
||||
from agnext.core import AgentRuntime, CancellationToken
|
||||
|
||||
@dataclass
|
||||
class TextMessage:
|
||||
content: str
|
||||
source: str
|
||||
|
||||
@dataclass
|
||||
class MultiModalMessage:
|
||||
content: List[Union[str, Image]]
|
||||
source: str
|
||||
|
||||
@dataclass
|
||||
class Reset:
|
||||
pass
|
||||
|
||||
|
||||
class MyAgent(TypeRoutedAgent):
|
||||
def __init__(self):
|
||||
|
|
|
@ -16,9 +16,8 @@ AGNext's developer API consists of the following layers:
|
|||
- :doc:`core <reference/agnext.core>` - The core interfaces that define agent and runtime.
|
||||
- :doc:`application <reference/agnext.application>` - Implementations of the runtime and other modules (e.g., logging) for building applications.
|
||||
- :doc:`components <reference/agnext.components>` - Interfaces and implementations for agents, models, memory, and tools.
|
||||
- :doc:`chat <reference/agnext.chat>` - High-level API for creating demos and experimenting with multi-agent patterns. It offers pre-built agents, patterns, message types, and memory stores.
|
||||
|
||||
|
||||
To get you started quickly, we also offer [a suite of examples](https://github.com/microsoft/agnext/tree/main/python/examples) to demonstrate the core concepts.
|
||||
|
||||
.. toctree::
|
||||
:caption: Getting started
|
||||
|
@ -45,7 +44,6 @@ AGNext's developer API consists of the following layers:
|
|||
:hidden:
|
||||
|
||||
guides/type-routed-agent
|
||||
guides/group-chat-coder-reviewer
|
||||
guides/azure-openai-with-aad-auth
|
||||
guides/termination-with-intervention
|
||||
|
||||
|
@ -56,7 +54,6 @@ AGNext's developer API consists of the following layers:
|
|||
|
||||
reference/agnext.components
|
||||
reference/agnext.application
|
||||
reference/agnext.chat
|
||||
reference/agnext.core
|
||||
|
||||
.. toctree::
|
||||
|
|
|
@ -2,6 +2,12 @@
|
|||
|
||||
This directory contains examples and demos of how to use AGNext.
|
||||
|
||||
- `common`: Contains common implementations and utilities used by the examples.
|
||||
- `core`: Contains examples that illustrate the core concepts of AGNext.
|
||||
- `tool-use`: Contains examples that illustrate tool use in AGNext.
|
||||
- `patterns`: Contains examples that illustrate how multi-agent patterns can be implemented in AGNext.
|
||||
- `demos`: Contains interactive demos that showcase applications that can be built using AGNext.
|
||||
|
||||
## Core examples
|
||||
|
||||
We provide examples to illustrate the core concepts of AGNext:
|
||||
|
@ -42,7 +48,7 @@ We provide interactive demos that showcase applications that can be built using
|
|||
to implement the reflection pattern for image generation.
|
||||
- [`software_consultancy.py`](demos/software_consultancy.py): a demonstration of multi-agent interaction using
|
||||
the group chat pattern.
|
||||
- [`chess_game.py`](tool-use/chess_game.py): an example with two chess player agents that execute their own tools to demonstrate tool use and reflection on tool use.
|
||||
- [`chess_game.py`](demos/chess_game.py): an example with two chess player agents that execute their own tools to demonstrate tool use and reflection on tool use.
|
||||
|
||||
## Running the examples and demos
|
||||
|
||||
|
@ -52,24 +58,15 @@ First, you need a shell with AGNext and the examples dependencies installed. To
|
|||
hatch shell
|
||||
```
|
||||
|
||||
To run an example, just run the corresponding Python script. For example, to run the `coder_reviewer_pub_sub.py` example, run:
|
||||
To run an example, just run the corresponding Python script. For example:
|
||||
|
||||
```bash
|
||||
hatch shell
|
||||
python core/coder_reviewer.py
|
||||
python core/one_agent_direct.py
|
||||
```
|
||||
|
||||
Or simply:
|
||||
|
||||
```bash
|
||||
hatch run python core/coder_reviewer.py
|
||||
hatch run python core/one_agent_direct.py
|
||||
```
|
||||
|
||||
To enable logging, turn on verbose mode by setting the `--verbose` flag:
|
||||
|
||||
```bash
|
||||
hatch run python core/coder_reviewer.py --verbose
|
||||
```
|
||||
|
||||
By default the log file is saved in the same directory with the same filename
|
||||
as the script, e.g., "coder_reviewer.log".
|
||||
|
|
|
@ -2,20 +2,21 @@ import asyncio
|
|||
import json
|
||||
from typing import Any, Coroutine, Dict, List, Mapping, Sequence, Tuple
|
||||
|
||||
from ...components import (
|
||||
from agnext.components import (
|
||||
FunctionCall,
|
||||
TypeRoutedAgent,
|
||||
message_handler,
|
||||
)
|
||||
from ...components.memory import ChatMemory
|
||||
from ...components.models import (
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import (
|
||||
ChatCompletionClient,
|
||||
FunctionExecutionResult,
|
||||
FunctionExecutionResultMessage,
|
||||
SystemMessage,
|
||||
)
|
||||
from ...components.tools import Tool
|
||||
from ...core import AgentId, CancellationToken
|
||||
from agnext.components.tools import Tool
|
||||
from agnext.core import AgentId, CancellationToken
|
||||
|
||||
from ..types import (
|
||||
FunctionCallMessage,
|
||||
Message,
|
|
@ -1,14 +1,14 @@
|
|||
from typing import Literal
|
||||
|
||||
import openai
|
||||
|
||||
from ...components import (
|
||||
from agnext.components import (
|
||||
Image,
|
||||
TypeRoutedAgent,
|
||||
message_handler,
|
||||
)
|
||||
from ...components.memory import ChatMemory
|
||||
from ...core import CancellationToken
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.core import CancellationToken
|
||||
|
||||
from ..types import (
|
||||
Message,
|
||||
MultiModalMessage,
|
|
@ -1,11 +1,11 @@
|
|||
from typing import Any, Callable, List, Mapping
|
||||
|
||||
import openai
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from agnext.core import CancellationToken
|
||||
from openai import AsyncAssistantEventHandler
|
||||
from openai.types.beta import AssistantResponseFormatParam
|
||||
|
||||
from ...components import TypeRoutedAgent, message_handler
|
||||
from ...core import CancellationToken
|
||||
from ..types import PublishNow, Reset, RespondNow, ResponseFormat, TextMessage
|
||||
|
||||
|
|
@ -1,7 +1,8 @@
|
|||
import asyncio
|
||||
|
||||
from ...components import TypeRoutedAgent, message_handler
|
||||
from ...core import CancellationToken
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from agnext.core import CancellationToken
|
||||
|
||||
from ..types import PublishNow, TextMessage
|
||||
|
||||
|
|
@ -1,7 +1,8 @@
|
|||
from typing import Any, List, Mapping
|
||||
|
||||
from ...components.memory import ChatMemory
|
||||
from ...components.models import FunctionExecutionResultMessage
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import FunctionExecutionResultMessage
|
||||
|
||||
from ..types import Message
|
||||
|
||||
|
|
@ -1,7 +1,8 @@
|
|||
from typing import Any, List, Mapping
|
||||
|
||||
from ...components.memory import ChatMemory
|
||||
from ...components.models import FunctionExecutionResultMessage
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import FunctionExecutionResultMessage
|
||||
|
||||
from ..types import FunctionCallMessage, Message, TextMessage
|
||||
|
||||
|
|
@ -1,10 +1,11 @@
|
|||
import logging
|
||||
from typing import Any, Callable, List, Mapping
|
||||
|
||||
from ...components import TypeRoutedAgent, message_handler
|
||||
from ...components.memory import ChatMemory
|
||||
from ...components.models import ChatCompletionClient
|
||||
from ...core import AgentId, AgentProxy, CancellationToken
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import ChatCompletionClient
|
||||
from agnext.core import AgentId, AgentProxy, CancellationToken
|
||||
|
||||
from ..types import (
|
||||
Message,
|
||||
MultiModalMessage,
|
|
@ -3,9 +3,10 @@
|
|||
import re
|
||||
from typing import Dict, List
|
||||
|
||||
from ...components.memory import ChatMemory
|
||||
from ...components.models import ChatCompletionClient, SystemMessage
|
||||
from ...core import AgentProxy
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import ChatCompletionClient, SystemMessage
|
||||
from agnext.core import AgentProxy
|
||||
|
||||
from ..types import Message, TextMessage
|
||||
|
||||
|
|
@ -1,8 +1,9 @@
|
|||
import json
|
||||
from typing import Any, Sequence, Tuple
|
||||
|
||||
from ...components import TypeRoutedAgent, message_handler
|
||||
from ...core import AgentId, AgentRuntime, CancellationToken
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from agnext.core import AgentId, AgentRuntime, CancellationToken
|
||||
|
||||
from ..types import Reset, RespondNow, ResponseFormat, TextMessage
|
||||
|
||||
__all__ = ["OrchestratorChat"]
|
|
@ -4,8 +4,8 @@ from dataclasses import dataclass, field
|
|||
from enum import Enum
|
||||
from typing import List, Union
|
||||
|
||||
from ..components import FunctionCall, Image
|
||||
from ..components.models import FunctionExecutionResultMessage
|
||||
from agnext.components import FunctionCall, Image
|
||||
from agnext.components.models import FunctionExecutionResultMessage
|
||||
|
||||
|
||||
@dataclass(kw_only=True)
|
|
@ -1,14 +1,14 @@
|
|||
from typing import List, Optional, Union
|
||||
|
||||
from typing_extensions import Literal
|
||||
|
||||
from ..components.models import (
|
||||
from agnext.components.models import (
|
||||
AssistantMessage,
|
||||
FunctionExecutionResult,
|
||||
FunctionExecutionResultMessage,
|
||||
LLMMessage,
|
||||
UserMessage,
|
||||
)
|
||||
from typing_extensions import Literal
|
||||
|
||||
from .types import (
|
||||
FunctionCallMessage,
|
||||
Message,
|
|
@ -6,15 +6,12 @@ import asyncio
|
|||
import logging
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
from typing import List
|
||||
|
||||
import aiofiles
|
||||
import openai
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.agents import OpenAIAssistantAgent
|
||||
from agnext.chat.memory import BufferedChatMemory
|
||||
from agnext.chat.patterns._group_chat_manager import GroupChatManager
|
||||
from agnext.chat.types import PublishNow, TextMessage
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from agnext.core import AgentId, AgentRuntime, CancellationToken
|
||||
from openai import AsyncAssistantEventHandler
|
||||
|
@ -23,6 +20,13 @@ from openai.types.beta.threads import Message, Text, TextDelta
|
|||
from openai.types.beta.threads.runs import RunStep, RunStepDelta
|
||||
from typing_extensions import override
|
||||
|
||||
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
|
||||
|
||||
from common.agents import OpenAIAssistantAgent
|
||||
from common.memory import BufferedChatMemory
|
||||
from common.patterns._group_chat_manager import GroupChatManager
|
||||
from common.types import PublishNow, TextMessage
|
||||
|
||||
sep = "-" * 50
|
||||
|
||||
|
||||
|
|
|
@ -6,16 +6,17 @@ import os
|
|||
import sys
|
||||
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.memory import BufferedChatMemory
|
||||
from agnext.chat.types import Message, TextMessage
|
||||
from agnext.chat.utils import convert_messages_to_llm_messages
|
||||
from agnext.components import TypeRoutedAgent, message_handler
|
||||
from agnext.components.memory import ChatMemory
|
||||
from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage
|
||||
from agnext.core import AgentRuntime, CancellationToken
|
||||
|
||||
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
|
||||
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
|
||||
|
||||
from common.memory import BufferedChatMemory
|
||||
from common.types import Message, TextMessage
|
||||
from common.utils import convert_messages_to_llm_messages
|
||||
from utils import TextualChatApp, TextualUserAgent, start_runtime
|
||||
|
||||
|
||||
|
|
|
@ -5,19 +5,24 @@ and make moves, and using a group chat manager to orchestrate the conversation."
|
|||
import argparse
|
||||
import asyncio
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
from typing import Annotated, Literal
|
||||
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.agents._chat_completion_agent import ChatCompletionAgent
|
||||
from agnext.chat.memory import BufferedChatMemory
|
||||
from agnext.chat.patterns._group_chat_manager import GroupChatManager
|
||||
from agnext.chat.types import TextMessage
|
||||
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
|
||||
from agnext.components.tools import FunctionTool
|
||||
from agnext.core import AgentRuntime
|
||||
from chess import BLACK, SQUARE_NAMES, WHITE, Board, Move
|
||||
from chess import piece_name as get_piece_name
|
||||
|
||||
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
|
||||
|
||||
from common.agents._chat_completion_agent import ChatCompletionAgent
|
||||
from common.memory import BufferedChatMemory
|
||||
from common.patterns._group_chat_manager import GroupChatManager
|
||||
from common.types import TextMessage
|
||||
|
||||
|
||||
def validate_turn(board: Board, player: Literal["white", "black"]) -> None:
|
||||
"""Validate that it is the player's turn to move."""
|
||||
|
|
|
@ -6,14 +6,15 @@ import sys
|
|||
|
||||
import openai
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.agents import ChatCompletionAgent, ImageGenerationAgent
|
||||
from agnext.chat.memory import BufferedChatMemory
|
||||
from agnext.chat.patterns._group_chat_manager import GroupChatManager
|
||||
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
|
||||
from agnext.core import AgentRuntime
|
||||
|
||||
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
|
||||
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
|
||||
|
||||
from common.agents import ChatCompletionAgent, ImageGenerationAgent
|
||||
from common.memory import BufferedChatMemory
|
||||
from common.patterns._group_chat_manager import GroupChatManager
|
||||
from utils import TextualChatApp, TextualUserAgent, start_runtime
|
||||
|
||||
|
||||
|
|
|
@ -17,9 +17,6 @@ import aiofiles
|
|||
import aiohttp
|
||||
import openai
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.agents import ChatCompletionAgent
|
||||
from agnext.chat.memory import HeadAndTailChatMemory
|
||||
from agnext.chat.patterns._group_chat_manager import GroupChatManager
|
||||
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
|
||||
from agnext.components.tools import FunctionTool
|
||||
from agnext.core import AgentRuntime
|
||||
|
@ -28,7 +25,11 @@ from tqdm import tqdm
|
|||
from typing_extensions import Annotated
|
||||
|
||||
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
|
||||
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
|
||||
|
||||
from common.agents import ChatCompletionAgent
|
||||
from common.memory import HeadAndTailChatMemory
|
||||
from common.patterns._group_chat_manager import GroupChatManager
|
||||
from utils import TextualChatApp, TextualUserAgent, start_runtime
|
||||
|
||||
|
||||
|
|
|
@ -1,9 +1,20 @@
|
|||
import asyncio
|
||||
import os
|
||||
import random
|
||||
import sys
|
||||
from asyncio import Future
|
||||
|
||||
from agnext.application import SingleThreadedAgentRuntime
|
||||
from agnext.chat.types import (
|
||||
from agnext.components import Image, TypeRoutedAgent, message_handler
|
||||
from agnext.core import AgentRuntime, CancellationToken
|
||||
from textual.app import App, ComposeResult
|
||||
from textual.containers import ScrollableContainer
|
||||
from textual.widgets import Button, Footer, Header, Input, Markdown, Static
|
||||
from textual_imageview.viewer import ImageViewer
|
||||
|
||||
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
|
||||
|
||||
from common.types import (
|
||||
MultiModalMessage,
|
||||
PublishNow,
|
||||
RespondNow,
|
||||
|
@ -11,12 +22,6 @@ from agnext.chat.types import (
|
|||
ToolApprovalRequest,
|
||||
ToolApprovalResponse,
|
||||
)
|
||||
from agnext.components import Image, TypeRoutedAgent, message_handler
|
||||
from agnext.core import AgentRuntime, CancellationToken
|
||||
from textual.app import App, ComposeResult
|
||||
from textual.containers import ScrollableContainer
|
||||
from textual.widgets import Button, Footer, Header, Input, Markdown, Static
|
||||
from textual_imageview.viewer import ImageViewer
|
||||
|
||||
|
||||
class ChatAppMessage(Static):
|
||||
|
|
|
@ -3,23 +3,26 @@ import asyncio
|
|||
import json
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
from typing import Callable
|
||||
|
||||
import openai
|
||||
from agnext.application import (
|
||||
SingleThreadedAgentRuntime,
|
||||
)
|
||||
from agnext.chat.agents._chat_completion_agent import ChatCompletionAgent
|
||||
from agnext.chat.agents._oai_assistant import OpenAIAssistantAgent
|
||||
from agnext.chat.memory import BufferedChatMemory
|
||||
from agnext.chat.patterns._orchestrator_chat import OrchestratorChat
|
||||
from agnext.chat.types import TextMessage
|
||||
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
|
||||
from agnext.components.tools import BaseTool
|
||||
from agnext.core import AgentRuntime, CancellationToken
|
||||
from pydantic import BaseModel, Field
|
||||
from tavily import TavilyClient # type: ignore
|
||||
|
||||
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
|
||||
|
||||
from common.agents import ChatCompletionAgent, OpenAIAssistantAgent
|
||||
from common.memory import BufferedChatMemory
|
||||
from common.patterns._orchestrator_chat import OrchestratorChat
|
||||
from common.types import TextMessage
|
||||
|
||||
logging.basicConfig(level=logging.WARNING)
|
||||
logging.getLogger("agnext").setLevel(logging.DEBUG)
|
||||
|
|
@ -1,3 +0,0 @@
|
|||
"""
|
||||
The :mod:`agnext.chat` module is the concrete implementation of multi-agent interaction patterns
|
||||
"""
|