mirror of https://github.com/microsoft/autogen.git
Autogenstudio Update - Support for Anthropic/Mistral, Other Updates (#3439)
- [.Net] feature: Ollama integration (#2693)
  - [.Net] feature: Ollama integration with
  - [.Net] ollama agent improvements and reorganization
  - added ollama fact logic
  - [.Net] added ollama embeddings service
  - [.Net] Ollama embeddings integration
  - cleaned the agent and connector code
  - [.Net] cleaned ollama agent tests
  - [.Net] standardize api key fact ollama host variable
  - [.Net] fixed solution issue
  Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com>
- [.Net] Fix #2687 by adding global:: keyword in generated code (#2689)
  - add tests
  - remove approved file
  - update
  - update approve file
- update news (#2694)
  - update news
  - cleanup
- [.Net] Set up Name field in OpenAIMessageConnector (#2662)
  - create OpenAI tests project
  - update
  - update
  - add tests
  - add more tests
  - update comment
  - Update dotnet/src/AutoGen.OpenAI/Middleware/OpenAIChatRequestMessageConnector.cs
    Co-authored-by: David Luong <davidluong98@gmail.com>
  - Update AutoGen.OpenAI.Tests.csproj
  - fix build
  Co-authored-by: David Luong <davidluong98@gmail.com>
- Custom Runtime Logger <> FileLogger (#2596)
  - added logger param for custom logger support
  - added FileLogger
  - bump: spell check
  - bump: import error
  - added more log functionalities
  - bump: builtin logger for FileLogger
  - type check and instance level logger
  - tests added for the FileLogger
  - formatting bump
  - updated tests and removed time formatting
  - separate module for the filelogger
  - update file logger test
  - added the FileLogger into the notebook
  - bump json decode error
  - updated requested changes
  - Updated tests with AutoGen agents
  - bump file
  - bump: logger accessed before initialized, solved
  - Updated notebook to guide with a filename
  - added thread_id to the FileLogger
  - bump type check in tests
  - Updated thread_id for each log event
  - Updated thread_id for each log event
  - Updated with tempfile
  - bump: str cleanup
  - skipping windows tests
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
- Update groupchat.py to remove Optional type hint when they are not checked for None (#2703)
- gpt40 tokens update (#2717)
- [CAP] Improved AutoGen Agents support & Pip Install (#2711)
  - 1) Removed most framework sleeps 2) refactored connection code
  - pre-commit fixes
  - pre-commit
  - ignore protobuf files in pre-commit checks
  - Fix duplicate actor registration
  - refactor change
  - Nicer printing of Actors
  - 1) Report recv_multipart errors 4) Always send 4 parts
  - AutoGen generate_reply expects to wait indefinitely for an answer. CAP can wait a certain amount and give up. In order to reconcile the two, AutoGenConnector is set to wait indefinitely.
  - pre-commit formatting fixes
  - pre-commit format changes
  - don't check autogenerated proto py files
  - Iterating on CAP interface for AutoGen
  - User proxy must initiate chat
  - autogencap pypi package
  - added dependencies
  - serialize/deserialize dictionary elements to json when dealing with ReceiveReq
  - 1) Removed most framework sleeps 2) refactored connection code
  - Nicer printing of Actors
  - AutoGen generate_reply expects to wait indefinitely for an answer. CAP can wait a certain amount and give up. In order to reconcile the two, AutoGenConnector is set to wait indefinitely.
  - pre-commit formatting fixes
  - pre-commit format changes
  - Iterating on CAP interface for AutoGen
  - User proxy must initiate chat
  - autogencap pypi package
  - added dependencies
  - serialize/deserialize dictionary elements to json when dealing with ReceiveReq
  - pre-commit check fixes
  - fix pre-commit issues
  - Better encapsulation of logging
  - pre-commit fix
  - pip package update
- [.Net] fix #2722 (#2723)
  - fix bug and add tests
  - update
- [.Net] Mark Message as obsolete and add ToolCallAggregateMessage type (#2716)
  - make Message obsolete
  - add ToolCallAggregateMessage
  - update message.md
  - address comment
  - fix tests
  - set round to 1 temporarily
  - revert change
  - fix test
  - fix test
- Update README.md (#2736)
- Update human-in-the-loop.ipynb (#2724)
- [CAP] Refactor: Better Names for classes and methods (#2734)
  - Bug fix
  - Refactor: Better class names, method names
  - pypi version
  - pre-commit fixes
- Avoid requests 2.32.0 to fix build (#2761)
  - Avoid requests 2.32.0 to fix build
  - comment
  - quote
- Debug: Gemini client was not logged and causing runtime error (#2749)
  - Debug: gemini client was not logged
  - Resolve docker issue in LMM test
  - Resolve comments
  Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com>
- [Add] Fix invoking Assistant API (#2751)
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- Add silent option in nested chats and group chat (#2712)
  - feat: respect silent request in nested chats and group chat
  - fix: address plugin test
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- fix openai compatible changes (#2718)
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- add warning if duplicate function is registered (#2159)
  - add warning if duplicate function is registered
  - check _function_map and llm_config
  - check function_map and llm_config
  - use register_function and llm_config
  - cleanups
  - cleanups
  - warning test
  - warning test
  - more test coverage
  - use a fake config
  - formatting
  - formatting
  Co-authored-by: Jason <jtoy@grids.local>
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- Added ability to ignore the addition of the select speaker prompt for a group chat (#2726)
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- Update Deprecation Warning for `CompressibleAgent` and `TransformChatHistory` (#2685)
  - improved deprecation warnings
  - compressible_agent test fix
  - fix retrieve chat history test
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- added Gemini safety setting and Gemini generation config (#2429)
  - added Gemini safety setting and Gemini generation config
  - define params_mapping as a constant as a class variable
  - fixed formatting issues
  Co-authored-by: nikolay tolstov <datatraxer@gmail.com>
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- logger fix (#2659)
  Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com>
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- Ignore Some Messages When Transforming (#2661)
  - works
  - spelling
  - returned old docstring
  - add cache fix
  - spelling?
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- [.Net] rename Autogen.Ollama to AutoGen.Ollama and add more test cases to AutoGen.Ollama (#2772)
  - update test
  - add llava test
  - add more tests
  - rm Autogen.Ollama
  - add AutoGen.ollama
  - update
  - rename to temp
  - remove ollama
  - rename
  - update
  - rename
  - rename
  - update
- [.Net] add AutoGen.SemanticKernel.Sample project (#2774)
  - add AutoGen.SemanticKernel.Sample
  - revert change
- [.Net] add ollama-sample and adds more tests (#2776)
  - add ollama-sample and adds more tests
  - Update AutoGen.Ollama.Sample.csproj
- Create JSON_mode_example.ipynb (#2554)
  - Create JSON_mode_example.ipynb
  - updated json example
  - added metadata to JSON notebook
  - fixed details in wrong metadata
  - Update JSON_mode_example.ipynb: removed colab cell
  - fixed error
  - removed cell output
  - whitespace fixed (I think it's fixed?)
  - finally fixed whitespace
- Add packaging explicitly (#2780)
- Introduce AnthropicClient and AnthropicClientAgent (#2769)
  - Reference project. Revert "Set up the Agent. Basic Example set up, boilerplate for connector, ran into signing issue." This reverts commit 0afe04f2. End to end working anthropic agent + unit tests. Set up the Agent. Basic Example set up, boilerplate for connector, ran into signing issue.
  - Add pragma warning
  - Remove Message type; fix tabbing/white space in csproj; remove redundant inheritance; edit Anthropic.Tests' rootnamespace; create AutoGen.Anthropic.Samples
  - short-cut agent extension method
  - Pass system message in the constructor and throw if there's a system message in IMessages
  Co-authored-by: luongdavid <luongdavid@microsoft.com>
- actions version update for the TransformMessages workflow (#2759)
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
- allow serialize_to_str to work with non-ascii when dumping via json.dumps (#2714)
  Co-authored-by: Jason <jtoy@grids.local>
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
- PGVector Support for Custom Connection Object (#2566)
  - Added fixes and tests for basic auth format
  - User can provide their own connection object. Added test for it.
  - Updated instructions on how to use. Fully tested all 3 authentication methods successfully.
  - Get password from gitlab secrets.
  - Hide passwords.
  - Update notebook/agentchat_pgvector_RetrieveChat.ipynb
    Co-authored-by: Li Jiang <bnujli@gmail.com>
  - Hide passwords.
  - Added connection_string test. 3 tests total for auth.
  - Fixed quotes on db config params. No other changes found.
  - Ran notebook
  - Ran pre-commits and updated setup to include psycopg[binary] for windows and mac.
  - Corrected list extension.
  - Separate connection establishment function. Testing pending.
  - Fixed pgvectordb auth
  - Update agentchat_pgvector_RetrieveChat.ipynb: added autocommit=True in example
  - Rerun notebook
  Co-authored-by: Li Jiang <bnujli@gmail.com>
  Co-authored-by: Li Jiang <lijiang1@microsoft.com>
- Remove duplicate project declared in AutoGen.sln (#2789)
  - remove duplicate project in AutoGen.sln
  - Add EndProject
- [fix] file logger import (#2773)
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
- DBRX (Databricks LLM) example notebook (#2434)
  - Resolving test failures locally
  - Resolving test failures locally
  - Updates to website resources and docs, author
  - Adding image
  - Fixes to precommit and doc files for lfd
  - Fixing ruff exclusion of new notebook
  - Updates to support notebook rendering
  - Updates to support notebook rendering
  - Removing some results to try to fix docs render issue
  - pre-commit to standardize formatting
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
- Blogpost and news (#2790)
  - blog and news
  - update
  - economist
  - news update
  - bump version to 0.2.28
  - link update
  - address comments
  - address comments
  - add quote
  - address comment
  - address comment
  - fix link
  - guidance
- Update Getting-Started.mdx (#2781): add missing os import
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
- Improve the error message (#2785)
- fix links and tags from databricks notebook (#2795)
- fix type object 'ConversableAgent' has no attribute 'DEFAULT_summary_prompt' (#2788)
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
- print next speaker (#2800)
  - print next speaker
  - fix test error
- [.Net] Release note for 0.0.14 (#2815)
  - update release note
  - update trigger
- [.Net] Update website for AutoGen.SemanticKernel and AutoGen.Ollama (#2814)
  - update sk documents
  - add ollama doc
- [CAP] User supplied threads for agents (#2812)
  - First pass: message loop in main thread
  - pypi version bump
  - Fix readme
  - Better example
  - Fixed docs
  - pre-commit fixes
- Fix initialization of client in retrieve_docs() function (#2830)
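The PGVector work above (#2566) mentions three authentication paths that were each tested: a caller-supplied connection object, a full connection string, and individual host/db parameters. A minimal, driver-agnostic sketch of that dispatch logic; every name here is hypothetical, and the driver's connect function (e.g. psycopg's) is injected rather than imported:

```python
from typing import Any, Callable, Optional


def resolve_connection(
    conn: Optional[Any] = None,
    connection_string: Optional[str] = None,
    host: Optional[str] = None,
    port: Optional[int] = None,
    dbname: Optional[str] = None,
    connect: Optional[Callable] = None,
):
    """Pick one of three auth paths, in priority order (illustrative sketch).

    `connect` is the database driver's connect function (e.g. psycopg.connect),
    passed in so this sketch stays independent of any particular driver.
    """
    if conn is not None:
        # A user-supplied connection object always wins.
        return conn
    if connection_string:
        return connect(connection_string)
    if host and dbname:
        return connect(host=host, port=port or 5432, dbname=dbname)
    raise ValueError("Provide a connection object, a connection string, or host/dbname.")
```

The priority order (object, then string, then parameters) is one reasonable choice; the real implementation may differ.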
- fix typo and update news (#2825)
  - fix typo and update news
  - add link
  - update link
  - fix metadata
  - tag
- Add llamaindex agent integration (#2831)
  - white spaces
  - add llamaindex agent wrapper for autogen
  - formatting
  - formatting fixes
  - add support for llamaindex agents
  - fix style
  - fix style
  - delete file
  - re-add file
  - fixes pre-commit errors
  - feat: Add agentchat_group_chat_with_llamaindex_agents notebook, which demonstrates how to integrate LlamaIndex agents into AutoGen; it includes code for setting up the API endpoint, creating LlamaIndex agents, and setting up a group chat
  - Refactor code
  - feat: Add test for LLamaIndexConversableAgent in a new file `test_llamaindex_conversable_agent.py`. The test verifies group chat with two MultimodalConversableAgents, limits the chat by the `max_round` parameter, and checks that the number of rounds does not exceed the maximum specified, ensuring that `LLamaIndexConversableAgent` behaves as expected and correctly handles group chats with limited rounds. Includes the import statements and setup code needed to run the test case.
  - fix formatting
  - feat: Add LlamaIndexAgent job to the GitHub Actions workflow. The job runs on multiple operating systems (ubuntu-latest, macos-latest, windows-2019) with Python 3.11: it sets up the Python environment, installs the packages and dependencies needed for LMM, runs coverage testing with pytest, and uploads the coverage report to Codecov. Also updates test_llamaindex_conversable_agent.py to import the os and sys modules, append a path to sys.path, and add skip conditions for tests based on certain conditions.
  - fix test yaml
  - cleanup tests
  - fix test run
  - formatting
  - add test
  - fix yaml
  - pr feedback
  - add documentation to website
  - fixed style
  - edit to document page
  - newline
  - make skip reason easier to see
  - compose skip reasons
  - fix env variable name
  - refactor: Update package installation in contrib workflows: replaced specific package installations with more general ones; updated the installation of llama-index packages and dependencies; added new packages for llama-index and Wikipedia tools
  - Update dependencies and add new agents for group chat: update dependencies to specific versions; add new agent `entertainent_specialist` for discovering entertainment opportunities in a location; modify the `test_group_chat_with_llama_index_conversable_agent` function to include the new agent in the group chat
  - Update pydantic version requirement in setup.py from "pydantic>=1.10,<3,!=2.6.0" to "pydantic>=1.10,<3"; remove the comment about the issue with pydantic 2.6.0
  - Refactor ChatMessage instantiation in _extract_message_and_history(): add an empty dictionary as additional_kwargs to the ChatMessage constructor
  - Refactor test_llamaindex_conversable_agent.py: removed an unused import and variable; updated the OpenAI model to gpt-4; reduced max_iterations for location_specialist and entertainment_specialist from 30 to 5; changed human_input_mode for user_proxy to "NEVER"; added an assertion for max_rounds in entertainent_assistant. These changes improve code efficiency and ensure proper functionality.
  - Remove entertainment_specialist agent and update user_proxy settings in test_llamaindex_conversable_agent.py: remove the creation of the entertainent_specialist agent; update max_consecutive_auto_reply for user_proxy to 10; update default_auto_reply for user_proxy to "Thank you. TERMINATE"
  - Refactor installation of LlamaIndex packages and dependencies: simplify the installation commands; remove specific version numbers from pip install commands
  - Update test_llamaindex_conversable_agent.py to include verbose output during pytest: add the -v flag to the pytest command in contrib-openai.yml; print a message when skipping the test due to missing dependencies or key
  - Refactor OpenAI workflow, remove LlamaIndexAgent: it was no longer necessary, and its removal streamlines the workflow and improves code organization and maintainability
  - feat: Add test for group chat functionality with LLamaIndexConversableAgent; use a mock ReActAgent in the test
  - Update Dockerfile for devcontainer: add installation of build-essential, npm, git-lfs, and other packages; upgrade pip and install the pydoc-markdown, pyyaml, and colored libraries
  - Update devcontainer.json with new VS Code extensions and terminal settings: add "GitHub.copilot" to the extension list; add a Linux terminal profile with the path set to "/bin/bash" and make it the default
  - removeall
  - feat: Add Dockerfiles and devcontainer configurations for different use cases in the `.devcontainer` directory: a `Dockerfile` for basic setups (`base`), for advanced features (`full`), for AutoGen project developers (`dev`), and for AutoGen project developers using Studio (`studio`); update existing files with necessary dependencies and configurations; modify README.md with instructions on customizing Dockerfiles and managing the Docker environment. These changes let users easily set up their AutoGen development environment using Docker containers.
  - delete
  - Add authors.yml file with author information for the website; each entry includes the author's name, title, URL, and image URL
  - delete
  - Add test cases for agent chat functionality: scenarios include auto feedback from code execution, function calls, currency calculator, async function calls, group chat finite state machine, cost token tracking, and group chat state flow; the tests cover Python 3.10, 3.11, and 3.12 and are skipped if OpenAI is not installed or the Python version does not match
  - delete
  - feat: Add LLM configuration documentation covering the `llm_config` argument, `config_list`, and other configuration parameters, with examples of filtering the `config_list` based on model names and tags, adding an HTTP client in `llm_config` for proxy usage, and helper functions for loading a config list from API keys, environment variables, files, or `.env` files. Closes #1234
  - delete
  - feat: Add LLM configuration documentation (a second pass over the same content, with the config list loaded from various sources). Closes #1234
  - delete
  - adding back notebooks
  - reset
  - feat: Add setup.py for package installation, including the author, description, and dependencies of the package, plus extra requirements for specific functionalities such as retrieving chat data or running Jupyter notebooks; this improves the usability and installation process of the package
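The LLM configuration documentation described above shows filtering a `config_list` by model names and tags. A simplified, self-contained sketch of that filtering idea; this mimics, but is not, AutoGen's actual `filter_config` helper, and the field names are just the ones the docs mention:

```python
from typing import Dict, List


def filter_config(config_list: List[Dict], filter_dict: Dict) -> List[Dict]:
    """Keep config entries whose fields intersect the requested values
    (illustrative sketch of tag/model filtering, not the shipped helper)."""

    def matches(cfg: Dict) -> bool:
        for key, wanted in filter_dict.items():
            value = cfg.get(key)
            # A field may be a scalar ("model") or a list ("tags").
            values = value if isinstance(value, list) else [value]
            if not set(values) & set(wanted):
                return False
        return True

    return [cfg for cfg in config_list if matches(cfg)]
```

For example, `filter_config(configs, {"tags": ["gpt-4"]})` keeps only entries whose `tags` list contains `"gpt-4"`, which is the pattern the tag-filtering commits rely on.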
- Broken links fix (#2843)
  - Update Examples.md
  - Update agent_chat.md
  - Update agent_chat.md
  - Update Optional-Dependencies.md
  - Update JSON_mode_example.ipynb
  - Update JSON_mode_example.ipynb
  - Update JSON_mode_example.ipynb
  - Update JSON_mode_example.ipynb
  - Update agentchat_agentoptimizer.ipynb
  - Update agentchat_nested_chats_chess.ipynb
- update guide about roadmap issues (#2846)
  - update guide about roadmap issues
  - update link
- Fix chromadb get_collection ignores custom embedding_function (#2854)
- Use Gemini without API key (#2805)
  - google default auth and svc keyfile for Gemini
  - [.Net] Release note for 0.0.14 (#2815): update release note; update trigger
  - [.Net] Update website for AutoGen.SemanticKernel and AutoGen.Ollama (#2814); support vertex ai compute region
  - [CAP] User supplied threads for agents (#2812): first pass, message loop in main thread; pypi version bump; fix readme; better example; fixed docs; pre-commit fixes
  - refactoring, minor fixes, update gemini demo ipynb
  - add new deps again and reset line endings
  - Docstring for the init function. Use private methods
  - improve docstring
  Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com>
  Co-authored-by: Rajan <rajan.chari@yahoo.com>
  Co-authored-by: Zoltan Lux <z.lux@campus.tu-berlin.de>
- Refactor hook registration and processing methods (#2853)
  - Refactored the `hook_lists` dictionary to use type hints for better readability; updated the `register_hook` method signature to include type hints for the `hook` parameter; added type hints to the `process_last_received_message` method parameters and return value. These changes in `conversable_agent.py` improve code readability and maintainability.
  - Refactor hook_lists initialization and add type hints: initialize `hook_lists` with a colon instead of an equal sign; add type hints for the parameters and return types of the `process_last_received_message` method
  - Refactor hook registration and processing in conversable_agent.py: use a more generic type for the list of hooks in `hook_lists`; update the signature checks for the `process_message_before_send`, `process_all_messages_before_reply`, and `process_last_received_message` hooks to ensure they are callable with the correct signatures; raise a ValueError or TypeError if any hook does not have the expected signature
  - Refactor hook processing in conversable_agent.py: simplify the code by removing unnecessary type checks and error handling; consolidate the hook-processing logic in the `_process_message_before_send`, `process_all_messages_before_reply`, and `process_last_received_message` methods
  - Refactor register_hook method signature for flexibility: the second argument, `hook`, is now of type `Callable` instead of `Callable[[List[Dict]], List[Dict]]`, allowing more flexibility when registering hooks
- [.Net] Add AOT compatible check for AutoGen.Core (#2858)
  - add AutoGen.AotCompatibility test
  - add aot test
  - fix build error
  - update ps1 path
- Updated the azure client to support AAD auth. (#2879)
- add github icon (#2878)
- [Refactor] Transforms Utils (#2863)
  - wip
  - tests + docstrings
  - improves tests
  - fix import
- allow function to remove termination string in groupchat (#2804)
  - allow function to remove termination string in groupchat
  - improve docstring
  - improve docstring
  - improve test case description
  Co-authored-by: Joshua Kim <joshkyh@users.noreply.github.com>
- AgentOps Runtime Logging Implementation (#2682)
  - add agentops req
  - track conversable agents with agentops
  - track tool usage
  - track message sending
  - remove record from parent
  - remove record
  - simple example
  - notebook example
  - remove spacing change
  - optional dependency
  - documentation
  - remove extra import
  - optional import
  - record if agentops
  - if agentops
  - wrap function auto name
  - install agentops before notebook test
  - documentation fixes
  - notebook metadata
  - notebook metadata
  - pre-commit hook changes
  - doc link fixes
  - git lfs
  - autogen tag
  - bump agentops version
  - log tool events
  - notebook fixes
  - docs
  - formatting
  - Updated ecosystem manual
  - Update notebook for clarity
  - cleaned up notebook
  - updated precommit recommendations
  - Fixed links to screenshots and examples
  - removed unused files
  - changed notebook hyperlink
  - update docusaurus link path
  - reverted setup.py
  - change setup again
  - undo changes
  - revert conversable agent
  - removed file not in branch
  - Updated notebook to look nicer
  - change letter
  - revert setup
  - revert setup again
  - change ref link
  - change reflink
  - remove optional dependency
  - removed duplicated section
  - Addressed clarity comments from howard
  - minor updates to wording
  - formatting and pr fixes
  - added info markdown cell
  - better docs
  - notebook
  - observability docs
  - pre-commit fixes
  - example images in notebook
  - example images in docs
  - example images in docs
  - delete agentops ong
  - doc updates
  - docs updates
  - docs updates
  - use agent as extra_kwarg
  - add logging tests
  - pass function properly
  - create table
  - dummy function name
  - log chat completion source name
  - safe serialize
  - test fixes
  - formatting
  - type checks
  Co-authored-by: reibs <areibman@gmail.com>
  Co-authored-by: Chi Wang <wang.chi@microsoft.com>
  Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
  Co-authored-by: Howard Gil <howardbgil@gmail.com>
  Co-authored-by: Alex Reibman <meta.alex.r@gmail.com>
- Autogenstudio docs (#2890)
  - add autogenstudio docs
  - update ags readme to point to docs page
  - update docs
  - update docs
  - update faqs
  - update, fix typos
- [.Net] Add Google Gemini (#2868)
  - update
  - add vertex gemini test
  - remove DTO
  - add test for vertexGeminiAgent
  - update test name
  - update IGeminiClient interface
  - add test for streaming
  - add message connector
  - add gemini message extension
  - add tests
  - update
  - add gemini sample
  - update examples
  - add test for image
  - fix test
  - add more tests
  - add streaming message test
  - add comment
  - remove unused json
  - implement google gemini client
  - update
  - fix comment
- Squash changes (#2849)
- version update (#2908)
  - version update
  - version update
- Bugfix: PGVector/RAG - Calculate the Vector Size based on Model Dimensions (#2865)
  - Calculate the dimension size based off the model chosen.
  - Added example docstring.
  - Validated working notebook with sentence models of different dimensions.
  - Validated removal of model_name working.
  - Second example uses conn object.
  - embedding_function no longer directly references .encode
  - Fixed pre-commit issue.
  - Use try/except to raise error when shape is not found in embedding function.
  - Re-ran notebook.
* Update autogen/agentchat/contrib/vectordb/pgvectordb.py Co-authored-by: Li Jiang <bnujli@gmail.com> * Update autogen/agentchat/contrib/vectordb/pgvectordb.py Co-authored-by: Li Jiang <bnujli@gmail.com> * Added .encode * Removed example comment. * Fix overwrite doesn't work with existing collection when custom embedding function has different dimension from default one --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * Update notebook (#2886) * Change chunk size of vectordb from max_tokens to chunk_token_size (#2896) * Update retrieve_user_proxy_agent.py * Update retrieve_user_proxy_agent.py --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * CRLF changed to LF (#2915) * pre-commit version update and a few spelling fixes (#2913) * Improve update context condition checking rule (#2883) Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Docs typo cli-code-executor.ipynb (#2909) Co-authored-by: Chi Wang <wang.chi@microsoft.com> * human input mode annotations fixed (#2864) Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [.Net] Add Gemini samples to AutoGen.Net website + configure Gemini package to be ready for release (#2917) * update website * fix buid error * update * changed CRLF to LF (#2935) * Bump braces from 3.0.2 to 3.0.3 in /website (#2934) Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3. - [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md) - [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3) --- updated-dependencies: - dependency-name: braces dependency-type: indirect ... 
Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * update update.md (#2937) * Allow passing in custom pricing in config_list (#2902) * update * update * TODO comment removed * update --------- Co-authored-by: Yiran Wu <32823396+kevin666aa@users.noreply.github.com> Co-authored-by: Davor Runje <davor@airt.ai> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Update OAI_CONFIG_LIST_sample (#2867) Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Fix repeated comma typo (#2940) * [.Net] update oai tests by using new OpenAI resources (#2939) * update oai tests * Update MetaInfo.props * [Autobuild] improve robustness and reduce cost (#2907) * Update Autobuild. * merge main into autobuild * update test for new autobuild * update author info * fix pre-commit * Update autobuild notebook * Update autobuild_agent_library.ipynb * Update autobuild_agent_library.ipynb * Fix pre-commit failures. --------- Co-authored-by: Linxin Song <rm.social.song1@gmail.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Filter models with tags instead of model name (#2912) * identify model with tags instead of model name * models * model to tag * add more model name * format * Update test/agentchat/test_function_call.py Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Update test/agentchat/test_function_call.py Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Update test/agentchat/test_tool_calls.py Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Update test/agentchat/test_tool_calls.py Co-authored-by: Chi Wang <wang.chi@microsoft.com> * remove uncessary tags * use gpt-4 as tag * model to tag * add tag for teachable agent test --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: AutoGen-Hub <flaml20201204@gmail.com> * Fix missing messages in Gemini history (#2906) * fix missing message in history * fix message handling * add list of Parts to Content object * add test 
for gemini message conversion function * add test for gemini message conversion * add message to asserts * add safety setting support for vertexai * remove vertexai safety settings * Client class utilities (#2949) * Addition of client utilities, initially for parameter validation * Corrected test * update: type checks and few tests * fix: docs, tests --------- Co-authored-by: Hk669 <hrushi669@gmail.com> * change specified api-version (#2955) * Update agentchat_function_call_currency_calculator.ipynb (#2952) minor fix * Bump ws from 7.5.9 to 7.5.10 in /website (#2964) Bumps [ws](https://github.com/websockets/ws) from 7.5.9 to 7.5.10. - [Release notes](https://github.com/websockets/ws/releases) - [Commits](https://github.com/websockets/ws/compare/7.5.9...7.5.10) --- updated-dependencies: - dependency-name: ws dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * should_hide_tools function added to client_utils (#2966) * Anthropic Client (#2931) * intial setup for the anthropic client with cost config * update: other methods added * fix: formatting * fix: config unused * update: changes made in the client * update: test added to the workflow * update: added tests to the anthropic client * fix: errors in workflows and client * fix * fix: anthropic tools type * update: notebook anthropic * Nonetype fixed * fix-tests config * update: tests and client issues * logger support * remove sys path * updated the functioning of the client * update: type hints and stream * skip tests- importerror * fix: anthropic client and tests * none fix * Alternating roles, parameter keywords, cost on response, * update: anthropic notebook * update: notebook with more details * devcontainer * update: added validate_params from the client_utils * fix: formatting * fix: minor comment --------- Co-authored-by: Mark Sze <mark@sze.family> * a_initaite_chats update (#2958) * Fix #2845 - 
LocalCommandLineCodeExecutor is not working with virtual environments (#2926) * Used absolute path of virtual environment bin path in local command executors * Checked if the expected venv is used or not * Added code comments for documentation * fix: format issue - shutil lib --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: Li Jiang <bnujli@gmail.com> * type fix for ChatResult (#2973) * Fix #2960 by checking if the values are a list of lists. (#2971) * Fix #2960 by checking values are list of list * Reduce dictionary look up overhead * [.Net] fix #2859 (#2974) * add getting started sample project * update * update * revert change * [.Net] add ReAct sample (#2977) * add ReAct sample * fix source generator test * Mistral Client (#2892) * Initial commit of Mistral client class * Updated to manage final system message for reflection_with_llm * Add Mistral support to client class * Add Mistral support across the board (based on Gemini changes) * Test file for Mistral client * Updated handling of config, added notebook for documentation * Added support for additional API parameters * Remove unneeded code, updated exception raising * Updated handling of keywords, including type checks, defaults, warnings. Updated notebook example to remove logging warnings. * Added class description. * Updated tests to support new config handling.
* Moved parameter parsing to create function, minimised init, added parameter tests * Refined parameter validation * Correct spacing * Fixed string concat in parameter validation * Corrected upper/lower bound warning * Use client_tools, tidy up Mistral create, better handle tool call response, tidy tests * Update of documentation notebook, replacement of old version * Update to handle multiple tool_call recommendations in a message * Updated tests to accommodate multiple tool_calls as well as content in message * Update autogen/oai/mistral.py comment Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * cleanup, rewrite mock * update --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> Co-authored-by: kevin666aa <yrwu000627@gmail.com> * Fix qdrant version (#2984) * Anthropic client fixes (#2981) * add claude 3.5 sonnet to pricing * Fix import error for client_utils * fix import order for ruff formatter * name key is not supported in anthropic message so let's remove it * Improved tool use message conversion, changed create to return standard response * Converted tools to messages for speaker selection, moved message conversion to function, corrected bugs * Minor bracket typo. 
* Renaming function * add groupchat and run notebook --------- Co-authored-by: Mark Sze <mark@sze.family> Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * Together AI Client (#2919) * First pass together.ai client class * Config handling, models and cost * Added tests, moved param management to create function * Tests, parameter, validation, logging updates * Added use of client_utils PR 2949 * Updated to return OAI response * Notebook example * Improved function calling, updated tests, updated notebook with Chess example * Tidied up together client class, better parameter handling, simpler exception capture, warning for no cost, reuse in tests, cleaner tests * Update of documentation notebook, replacement of old version * Fix of messages parameter for hide_tools function call * Update autogen/oai/together.py Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * Update together.py to fix text --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> Co-authored-by: Yiran Wu <32823396+yiranwu0@users.noreply.github.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Uniform Interface for calling different LLMs (#2916) * update * update * Minor tweaks on text --------- Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * fix: created in ChatCompletion for clients (#2988) * Bump version to 0.2.30 (#2990) * update notebook wording and format (#2991) * Fixed alternating message role bug in Anthropic client (#2992) * Fixed alternating message role bug * Fix bug * Message handling to support multiple function calls (#2997) * LLM Observability documentation fixes: Broken links, grammar, and spelling (#2995) * update markdown hyperlinks to stable urls * update notebook images and text * re-write observability section * Updated section * update wording * added newline * update styling in image tags to be jsx compatible * added text * update link * simplified text 
--------- Co-authored-by: Braelyn Boynton <bboynton97@gmail.com> * bump version (#2999) Co-authored-by: Li Jiang <bnujli@gmail.com> * Improve doc in tutorial/conversation-patterns and customized_speaker_selection (#3006) * update * update --------- Co-authored-by: Yiran Wu <32823396+kevin666aa@users.noreply.github.com> * [.Net] Update website with Tutorial section (#2982) * update * Update -> Release Notes * add ImageChat * update * update * fix #2975 (#3012) * AgentEval Blogpost (#2954) * first draft of agent eval blog post * adding NextSteps section * Update website/blog/2024-06-21-AgentEval/index.mdx Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Update website/blog/2024-06-21-AgentEval/index.mdx Co-authored-by: Chi Wang <wang.chi@microsoft.com> * addressing some pr comments * fixing whitespace * fixing typo * adding bit about sequential chats * fixing whitespace * adding more about verifier --------- Co-authored-by: Beibin Li <BeibinLi@users.noreply.github.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * improve `Create agent with tools` and add tutorial reference in index.md (#3024) * #2708 Add a judgment to the graph constructor (#2709) * #2708 Add a judgment to the graph constructor * #2708 Add a judgment to the graph constructor & added unit test * #2708 #2079 move GraphTests to AutoGen.Tests; delete AutoGen.Core.Tests project * [.Net] add sample on how to make function call using lite llm and ollama Plus move ollama openai sample to AutoGen.OpenAI.Sample project (#3015) * add sample * Update Connect_To_Ollama.cs * Update Connect_To_Ollama.cs * Create azure_cosmos_db in ecosystems.md (#2371) * Create azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * Update azure_cosmos_db.md * fix
log_function_use warning (#3018) * Groq Client (#3003) * Groq Client Class - main class and setup, except tests * Change pricing per K, added tests * Streaming support, including with tool calling * Used Groq retries instead of loop, thanks Gal-Gilor! * Fixed bug when using logging. --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * [.Net] fix #3014 by adding local model function call in dotnet website (#3044) * add instruction in ollama-litellm function call example * add tutorial * fix tests * Update README.md (#3025) adding links to blogposts to increase clarity * [.Net] Support tools for AnthropicClient and AnthropicAgent (#2944) * Squash commits: support anthropic tools * Support tool_choice * Remove reference from TypeSafeFunctionCallCodeSnippet.cs and add own function in test proj * [.Net] Fix #3045 (#3047) * make IStreamingMessage obsolete * update final reply message * Fix llama_index tests (#3063) * Update qdrant dependency (#3064) * Update qdrant dependency * Update qdrant dependency * Fix simple typos in human-in-the-loop.ipynb (#3051) * update readme (#3057) * update readme * Update README.md Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * add notion link --------- Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * Blog post for enhanced non-OpenAI model support (#2965) * Blogpost for enhanced non-OpenAI model support * update: quickstart with simple conversation * update: authors details * Added upfront text * Added function calling, refined text. Added chess for alt-models notebook, updated examples listing.
* Added Groq to blog * Removed acknowledgements --------- Co-authored-by: Hk669 <hrushi669@gmail.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * Fix simple typo in chat-termination.ipynb (#3050) * Cohere Client (#3004) * initial setup for cohere client * client update * changes: ClintType added to the utils * Revert "changes: ClintType added to the utils" This reverts commit 80d6155228. * Message conversion to Cohere, Parameter handling, cost calculation, streaming, tool calling * Changed Groq references. * minor fix * tests added * ref fix * added in the workflows * Fixed bug on non-streaming text generation * fix: formatting * Support Cohere rule for last message not USER when tool_results exist * Added Cohere to documentation * fixed client.py merge, removed unnecessary comments in groq.py, updated Cohere documentation, added Groq documentation * log: ignored params * update: custom exception added --------- Co-authored-by: Mark Sze <mark@sze.family> Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * bump version (#3073) * Update azure_cosmos_db.md (#3043) * Update AutoTX Link on Gallery.json (#3082) * fix: support openai service account apikey format (#3078) * [.Net] Update FunctionCallTemplate.tt to encode `"` (#3068) * Update FunctionCallTemplate.tt changed the description assigning to handle double quotes in comments and prevent the generated code from breaking. * Added the necessary changes Fixed handling of double quotes in descriptions within FunctionCallTemplate.tt. Standardized newline characters to ensure consistency. Updated test cases in FunctionCallTemplateEncodingTests to verify correct encoding of double quotes in descriptions. Cleaned up unnecessary using directives in FunctionCallTemplateEncodingTests. Aligned the template with the approved test output. * test cases passing Test cases passing like `Starting test execution, please wait... A total of 1 test files matched the specified pattern. Passed! - Failed: 0, Passed: 9, Skipped: 0, Total: 9, Duration: 66 ms - AutoGen.SourceGenerator.Tests.dll (net8.0)` * Delete FunctionCallTemplateTests.TestFunctionCallTemplate.approved.txt Deleted the ApprovalTests/FunctionCallTemplateTests.TestFunctionCallTemplate.approved.txt successfully! * Revert "Delete FunctionCallTemplateTests.TestFunctionCallTemplate.approved.txt" This reverts commit 7a6ea9cf0d. --------- Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com> * Demo Notebook for Using Gemini with VertexAI (#3032) * add notebook for using Gemini with VertexAI * add missing image * remove part with workload identity federation * Spelling * Capitalisation and tweak on config note. * autogen gemini gcp image * fix formatting * move gemini vertexai notebook to website/docs/topics/non-openai-models * Adjust license Co-authored-by: Chi Wang <wang.chi@microsoft.com> * remove auto-generated cell --------- Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [.Net] fix #2695 and #2884 (#3069) * add round robin orchestrator * add constructor for orchestrators * add tests * revert change * return single orchestrator * address comment * [.Net] Agent as service: Run an `IAgent` as openai chat completion endpoint (#2633) * update * add test * clean up * update * Delete dotnet/src/AutoGen.Server/AutoGen.Service.csproj.user * implement streaming * add sample project * rename AutoGen.Service to AutoGen.WebAPI * rename AutoGen.Service to AutoGen.WebAPI * add stateflow to related papers (#3108) * [.Net] Prepare release note for AutoGen.Net 0.0.16 (#3117) * add release note * update repo info * fix notebook (#3093) * middleware examples updated to return modified message passing assertion. modified the default agent reply so that it is different from the user's prompt (#3128) * feat: Qdrant support for the VectorDB interface (#3035) * feat: Qdrant support * chore: pre-defined vector db * Fix issues --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * Fix websurfer test error (#3138) * Fix assertion error * Update triggers * [.Net] update sk version from 1.10.0 to 1.15.1 (#3131) * update sk version * fix sk test error * add cancellation token to transition check lambda (#3132) * fix build and tests (#3134) * [.Net] update dotnet-ci and dotnet-release to use 8.0.x version when setting up .NET.
And enable format check (#3136) * use 8.0.x version * enable format check * change file header * apply code format * add instructions in ci to fix format error * add comment back * update (#3144) (#3145) * Update qdrant notebook for new qdrant vectordb (#3140) * Add qdrant notebook, rename notebooks * Revert changes of pgvector notebook * Fix assertion error * Fixed a typo in tool-use.ipynb (#3151) Fixed a typo in tool-use.ipynb: comaptible -> compatible * Add Agentok into gallery (#3148) * docs: Added Agentok into gallery. * Fixed the format issue * Track agentok.png with Git LFS --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * Fix typo in agentchat_nestedchat.ipynb (#3139) * Update JSON_mode_example.ipynb (#3130) Improve minor mistakes in documentation * Fix docstring (#3172) * add streaming tool call example (#3167) * Added anthropic bedrock (#3103) * Added anthropic bedrock * Code format and fixed import * Added tests for anthropic bedrock * tests update --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * Update token_count_utils.py - Replace `print` with `logger.warning` for consistency (#3168) The code was using both `logger.warning` and `print` for showing warnings.
This commit fixes this inconsistency, which can be an issue in production environments / logging systems * fix: update method name in GeminiClient (#3007) - change from `_initialize_vartexai` to `_initialize_vertexai` Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * add Use AutoGen.Net agent as model in AG Studio (#3182) * add Use AutoGen.Net agent as model in AG Studio * add git lfs * test * image assets under dotnet/, notebook/, samples/, test/ and website/ (full path list omitted): convert to Git LFS * update (#3175) Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * [.Net] Allow passing a kernel to Interactive Service.
(#3183) * accept a running kernel for Interactive Service * add kernel running check * rename Service -> WebAPI (#3177) * Enhance vertexai integration (#3086) * switch to officially supported Vertex AI message sending + safety setting conversion for vertexai * add system instructions * switch to officially supported Vertex AI message sending + safety setting conversion for vertexai * fix bug in safety settings conversion * add missing system instructions * add safety settings to send message * add support for credentials objects * add type checking, change project_id to project arg * add more tests * fix mock creation in test * extend docstring * fix errors with gemini message format in chats * add option for vertexai response validation setting & improve docstring * re-adding empty message handling * add more tests * extend and improve gemini vertexai jupyter notebook * rename project arg to project_id and GOOGLE_API_KEY env var to GOOGLE_GEMINI_API_KEY * adjust docstring formatting * [.Net] Add a constructor which takes ChatCompletionOptions for OpenAIChatAgent (#3170) * accept ChatCompletionOptions in constructor * fix comment * [CAP] Convenience methods for protobuf and some minor refactoring (#3022) * First pass: message loop in main thread * pypi version bump * Fix readme * Better example * Fixed docs * pre-commit fixes * Convenience methods for protobufs * support non-color consoles * Non-color console and allow user input * Minor update to single_threaded_demo * new pypi version * pre-commit fixes * change pypi name --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * [CAP] Address missed PR comment changes (Minor) (#3201) * Address PR comments * Address PR comments * [.Net] fix #3203 (#3204) * add net6 & net8 * update * add tools and stop sequence * Fix typo in agentchat_society_of_mind.ipynb (#3180) Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * Fix Anthropic Bedrock support (#3210) * Added
_configure_openai_config_for_bedrock to include aws variables in openai_config, necessary for setting AnthropicBedrock as client. * Removed aws_session_token from required_keys * Removed check for aws_session_token * Removed all checks for aws_session_token * Ran pre-commit --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * Resolve arguments formatting (#3194) Fixed formatting for "clear_history" Co-authored-by: Chi Wang <wang.chi@microsoft.com> * +mdb atlas vectordb [clean_final] (#3000) * +mdb atlas * Update test/agentchat/contrib/vectordb/test_mongodb.py Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * update test_mongodb.py; we don't need to do the assert .collection_name vs .name * Try fix mongodb service * Try fix mongodb service * Update username and password * Update autogen/agentchat/contrib/vectordb/mongodb.py * closer --- but I'm not super thrilled about the solution... * PYTHON-4506 Expanded tests and simplified vector search pipelines * Update mongodb.py * Update mongodb.py - Casey * search_index_magic index_name change; keeping track of lucene indexes is tricky * Fix format * Fix tests * hacking trying to figure this out * Streamline checks for indexes in construction and restructure tests * Add tests for score_threshold, embedding inclusion, and multiple query tests * refactored create_collection to meet base object requirements * lint * change the localhost port to 27017 * add test to check that no embedding is there unless explicitly provided * Update logger * Add test get docs with ids=None * Rename and update notebook * have index management include waiting behaviors * Adds further optional waits for users and tests. Cleans up upsert.
* ensure the embedding size for multiple embedding inputs is equal to dimensions * fix up tests and add configuration to ensure documents and indexes are READY for querying * fix import failure * adjust typing for 3.9 * fix up the notebook output * changed language to communicate time taken on first init_chat call * replace environment variable usage --------- Co-authored-by: Fabian Valle <fabian.valle-simmons@mongodb.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> Co-authored-by: Li Jiang <bnujli@gmail.com> Co-authored-by: Casey Clements <casey.clements@mongodb.com> Co-authored-by: Jib <jib.adegunloye@mongodb.com> Co-authored-by: Jib <Jibzade@gmail.com> Co-authored-by: Cozypet <yanhan860711@gmail.com> * avoid scan tool false alarm (#3218) Co-authored-by: gongwn1 <gongwn1@lenovo.com> * Fix failing GitGuardian check (#3228) * Agent Observability Blog Post (#3209) * update markdown hyperlinks to stable urls * update notebook images and text * re-write observability section * Updated section * update wording * added newline * update styling in image tags to be jsx compatible * added text * update link * simplified text * created blog * replace flow images with fewer shadows * reformat line * add authors * updated discord link and direct paths to image URLS * removed images since they are not stored in the AgentOps github * remove trailing whitespaces * removed newline * removed whitespace * Update website/blog/2024-07-25-AgentOps/index.mdx Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * single quotes with double quotes --------- Co-authored-by: Braelyn Boynton <bboynton97@gmail.com> Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * Fix ConversableAgent broken link in agent_chat.md file to include the .md extension in the link for ConversableAgent (#3221) ConversableAgent has a broken link in website/docs/Use-Cases/agent_chat.md file * update input prompt message (#3149) Co-authored-by: Chi
Wang <wang.chi@microsoft.com> Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * Add gpt-4o-mini to model list (#3169) * Add gpt-4o-mini to model list * Fix formatting issue and verify with pre-commit * Remove extra space * Minor change to make pre-commit (formatting checks) pass --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> Co-authored-by: Ian <ArGregoryIan@gmail.com> * Observability blog post styling hot fix (#3234) * update markdown hyperlinks to stable urls * update notebook images and text * re-write observability section * Updated section * update wording * added newline * update styling in image tags to be jsx compatible * added text * update link * simplified text * created blog * replace flow images with fewer shadows * reformat line * add authors * updated discord link and direct paths to image URLS * removed images since they are not stored in the AgentOps github * remove trailing whitespaces * removed newline * removed whitespace * Update website/blog/2024-07-25-AgentOps/index.mdx Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * single quotes with double quotes * fix widths --------- Co-authored-by: Braelyn Boynton <bboynton97@gmail.com> Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * bump version (#3231) * bump version * update * format --------- Co-authored-by: kevin666aa <yrwu000627@gmail.com> Co-authored-by: Yiran Wu <32823396+yiranwu0@users.noreply.github.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [Typo] Update MongoDB Notebook to acknowledge >=M10 support (#3220) * [Typo] Update MongoDB Notebook to acknowledge >=M10 support The notebook instructions state we support only >=M30 clusters for AutoGen. This is slightly misleading. We support >=M10 clusters or any cluster that allows for index creation from client code.
This support is continually updating so this PR updates the language to reflect that. * Add link! --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * Update Microsoft Fabric notebook (#3243) Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * Fix reference links (#3239) * fix broken reference links that point to a page that doesn't exist * Fix 2 broken links and use the correct format --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Improve error messaging (#3236) * Update error language and corresponding tests * Updated another test to use the new error message * Recreated doc for Local LLMs - LiteLLM and Ollama - native function calling in Ollama (#3197) * Recreated documentation for Local LLMs - LiteLLM and Ollama * Added Docker = False for code execution example --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [.Net] add SendAsync api to iterate group chat step by step (#3214) * add SendAsync api and tests * update example to use new sendAsync API * bump version and add release note (#3246) * update version * early support for anthropic, mistral api * Add additional tests to capture edge cases and more error conditions (#3237) * Add additional unit tests to capture additional edge cases * fix formatting issue (pre-commit) * [CAP] Added a factory for runtime (#3216) * Added Runtime Factory to support multiple implementations * Rename ComponentEnsemble to ZMQRuntime * rename zmq_runtime * rename zmq_runtime * pre-commit fixes * pre-commit fix * pre-commit fixes and default runtime * pre-commit fixes * Rename constants * Rename Constants --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * [Feature]: Add global silent param for ConversableAgent (#3244) * feat: add is_silent param * modify param name * param doc * fix: silent only overwrite agent own * fix: change _is_silent to static method and receive verbose print * fix test failure * add kwargs for ConversableAgent subclass ---------
Co-authored-by: gongwn1 <gongwn1@lenovo.com> Co-authored-by: Li Jiang <bnujli@gmail.com> Co-authored-by: Umer Mansoor <umermk3@gmail.com> * Fix Issue #2880: Document the usage of the AAD auth (#2941) * Document the usage of the AAD auth. #2880 Added the document for the usage of AAD! * Update website/docs/topics/llm_configuration.ipynb Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * Updated Location and Link to Azure OpenAI documentation * Update AutoTX Link on Gallery.json (#3082) Co-Authored-By: Qingyun Wu <qingyun0327@gmail.com> Co-Authored-By: Yiran Wu <32823396+yiranwu0@users.noreply.github.com> Co-Authored-By: Chi Wang <wang.chi@microsoft.com> * Making the required changes Updated function description and parameter description as well. Also, created the corresponding cs file for the t4 file. And created the new test case and updated the checks as well. * Revert "Making the required changes" By mistake * Update llm_configuration.ipynb --------- Co-authored-by: Li Jiang <bnujli@gmail.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> Co-authored-by: Yiran Wu <32823396+yiranwu0@users.noreply.github.com> * only add the last message to chat history in SendAsync (#3272) * [.Net] Remove Azure.AI.OpenAI from AutoGen.DotnetInteractive package (#3274) * remove Azure.AI.OpenAI dependencies * fix build error * revert change * Correcting tool calling with Cohere (#3271) * Update cohere.py Key in the dictionary should be 'message' and not 'content' as it checks for message empty at a later point in code. * Update cohere.py Added required comments to the changes made in previous commit.
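The Cohere fix above (#3271) hinges on Cohere's v1 chat API expecting each `chat_history` entry to carry a `message` key rather than the OpenAI-style `content` key. A minimal sketch of that conversion, with a hypothetical helper name `to_cohere_history` and an assumed role mapping, not the actual `cohere.py` implementation:

```python
# Illustrative sketch only: convert OpenAI-style messages into Cohere v1
# chat_history entries. Each entry must use the 'message' key, not 'content',
# which is the key mix-up the commit above corrects. The helper name and the
# role mapping are assumptions for illustration.
ROLE_MAP = {"user": "USER", "assistant": "CHATBOT", "system": "SYSTEM"}

def to_cohere_history(messages):
    """Map OpenAI role names to Cohere roles and rename 'content' to 'message'."""
    return [
        {"role": ROLE_MAP.get(m["role"], "USER"), "message": m.get("content") or ""}
        for m in messages
    ]

history = to_cohere_history([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
])
print(history)
```

With the wrong key, a later "is the message empty?" check would always see an empty value, which is why the fix matters downstream.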
* Stop retrieve more docs if all docs have been returned (#3282) * avoid circular import (#3276) Co-authored-by: gongwn1 <gongwn1@lenovo.com> Co-authored-by: Li Jiang <bnujli@gmail.com> * [.Net] Fix #3306 (#3310) * break conversation when orchestartor return null * enable test on different OS * [.Net] add DotnetInteractiveKernelBuilder to AutoGen.DotnetInteractive (#3317) * add DotnetInteractiveBuilder * update * fix workflow * add pwsh test * update * add extract code extension * update workflow * [.Net] Add AutoGen.AzureAIInference (#3332) * add AutoGen.AzureAIInference * add tests * update readme * fix format * Support async nested chats (#3309) * Allow async nested chats in agent chat * Fix pre-comit * Minor fix * Fix * Address feedback * Update * Fix build error --------- Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> * fix broken link to conversational chess example (#3327) Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> * Add last_speaker to GroupChatManager (#3318) * Add last_speaker to GroupChatManager's property * Add docstring for last_speaker * Format docstring * Fix to issue #3295 related to Anthropic bedrock (#3298) * Fix to consider session token in request * Formatted --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> * Fix message history limiter for tool call (#3178) * fix: message history limiter to support tool calls * add: pytest and docs for message history limiter for tool calls * Added keep_first_message for HistoryLimiter transform * Update to inbetween to between * Updated keep_first_message to non-optional, logic for history limiter * Update transforms.py * Update test_transforms to match utils introduction, add keep_first_message testing * Update test_transforms.py for pre-commit checks --------- Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [.Net][AutoGen.DotnetInteractive] add DotnetInteractiveStdioConnector (#3337) * add 
DotnetInteractiveStdioCOnector * update * update * comment out DotnetInteractive test * add header * update * Add latest gpt-4o model: `gpt-4o-2024-08-06` (#3329) Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com> * version (#3343) * Removes Support For `TransformChatHistory` and `CompressibleAgent` (#3313) * remove old files * removes ci * removes faq --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * Updated Program.cs for Autogen.BasicSample to give a menu driven window making it easier to run variou Agent config. (#3346) * Remove dependency on RetrieveAssistantAgent for RetrieveChat (#3320) * Remove deps on RetrieveAssistantAgent for getting human input * Terminate when no more context * Add deprecation warning message * Clean up RetrieveAssistantAgent, part 1 * Update version * Clean up docs and notebooks * Missing backticks breaking documentation (#3357) * Update Mistral client class to support new Mistral v1.0.1 package (#3356) * Update Mistral client class to support new Mistral v1.0.1 package * Remove comments * Refactored assistant/system role order, tidied imports and comments --------- Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> * adding a new page Function comparison between Python AutoGen and Auto… (#3367) * adding a new page Function comparison between Python AutoGen and AutoGen\.Net * add page to autogen website * Update README.md to use camera-ready (#3370) * Add OpenAI Gemini Example for VertexAI Notebook (#3290) * add openai-gemini example * fix exec numbering * improve isntructions * fix br tag * mention roles/aiplatform.user and fix markdown reference * remove mentioning the editor role, and only use the Vertex AI User role --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [.NET] Add cache control to Anthropic Client (#3372) * Add cache control to anthropic client and write unit test & exampel * PR comments * Fix import ordering for build * Fix import orderings --------- Co-authored-by: 
Xiaoyun Zhang <bigmiao.zhang@gmail.com> * Update human-in-the-loop.ipynb (#3379) * update contact information on the repo and release package (#3383) * update contact information on the repo and release package * update contact * update * fix format * [.Net] Dispose kernel after running dotnet interactive tests (#3378) * dispose kernel after running test * add timeout * Ensure 'name' on initial message (#2635) * Update to ensure name on initial messages * Corrected test cases for messages now including names. * Added name to messages within select speaker nested chat * Corrected select speaker group chat tests for name field --------- Co-authored-by: Chi Wang <wang.chi@microsoft.com> * [.Net] Enable step-by-step execution for two-agent conversation SendAsync API (#3360) * return iasync iterator in sendasync function * fix build error * Add contributor list via contributors.md (#3384) * add contributor list * Update CONTRIBUTORS.md * Update CONTRIBUTORS.md * Update CONTRIBUTORS.md * Update CONTRIBUTORS.md; fix typos per code spell * Run pre-commit * Add link to contributors.md * Add link to contributors.md * Update CONTRIBUTORS.md * Update CONTRIBUTORS.md * Update CONTRIBUTORS.md --------- Co-authored-by: gagb <gagb@users.noreply.github.com> * Update CONTRIBUTORS.md; fix cell order (#3386) * Update CONTRIBUTORS.md; fix broken URL (#3387) * Update CONTRIBUTORS.md (#3393) * Update CONTRIBUTORS.md (#3391) Modify the display text of my github handle. 
* Add Language Agent Tree Search (LATS) notebook (#3376) * Add Language Agent Tree Search (LATS) notebook * removed outputs --------- Co-authored-by: Andy Zhou <andyzhou@4bd094a2-01.cloud.together.ai> Co-authored-by: Shaokun Zhang <shaokunzhang529@gmail.com> * [.Net] Release 0.1.0 (#3398) * update version and release note * Update MetaInfo.props * update release note * [.Net] Rename AutoGen.OpenAI to AutoGen.OpenAI.V1 (#3358) * fix build error * rename AutoGen.OpenAI to AutoGen.OpenAI.V1 * Update Docker.md;fix broken URL (#3399) This pull request includes a minor update to the CONTRIBUTORS.md file to correct the link to the Dockerfile README. * Fix QdrantVectorDB to use custom embedding_function when provided, defaulting to FastEmbedEmbeddingFunction() otherwise (#3396) Co-authored-by: Li Jiang <bnujli@gmail.com> * Add mongodb to topic guide (#3400) * Ability to add MessageTransforms to the GroupChat's Select Speaker nested chat (speaker_selection_method='auto') (#2719) * Initial commit with ability to add transforms to GroupChat * Added tests * Tidy up * Tidy up of variable names and commented out test * Tidy up comment * Update import to relative * Added documentation topic for transform messages for speaker selection. * Added handling of non-core module, transforms, in groupchat * Adjusted parameter if module not loaded. * Updated groupchat test which failed during CI/CD --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * Fix for group chat resume - full chat history for each agent (#3412) * Update agent_chat.md;Fix broken URL (#3416) This pull request includes a minor update to the agent_chat.md file to correct the link to the Enhanced Inference. 
Co-authored-by: gagb <gagb@users.noreply.github.com> * Add None back to remove_termination_string (#3410) Co-authored-by: Li Jiang <bnujli@gmail.com> * Amazon Bedrock Client for AutoGen (#3232) * intial commit for aws-bedrock * format * converse setup for model req-response * Renamed to bedrock.py, updated parameter parsing, system message extraction, client class incorporation * Established Bedrock class based on @astroalek and @ChristianT's code, added ability to disable system prompt separation * Image parsing and removing access credential checks * Added tests, added additional parameter support * Amazon Bedrock documentation * Moved client parameters to init, align parameter names with Anthropic, spelling fix, remove unnecessary imports, use base and additional parameters, update documentation * Tidy up comments * Minor typo fix * Correct comment re aws_region --------- Co-authored-by: Mark Sze <mark@sze.family> Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> Co-authored-by: Li Jiang <bnujli@gmail.com> * fix `ImportFromModule` is unhashable issue (#3362) * ImportFromModule is unhashable This fix makes the conversion to string prior to the deduplication to avoid this issue * add type annotation for global_imports * meet code formatting check --------- Co-authored-by: zcipod <zcipod@gmail.com> Co-authored-by: Li Jiang <bnujli@gmail.com> * Transform to add an agent's name into the message content (#3334) * Initial commit with ability to add name into content with a transform * Transforms documentation * Fix transform links in documentation --------- Co-authored-by: Li Jiang <bnujli@gmail.com> * Update gallery.json (#3414) Co-authored-by: Li Jiang <bnujli@gmail.com> Co-authored-by: gagb <gagb@users.noreply.github.com> * update contributors (#3420) * Update privacy link in readme and add Consumer Health Privacy notice on website (#3422) * Add studio pre-print (#3423) * Add studio pre-print * Fix formatting * Update README.md (#3424) * Update 
README.md * Fix formatting errors * Update CITATION.cff (#3427) Improve consistency with rest of the repo. * Add details about GAIA benchmark evaluation (#3433) * Add missing contributors (#3426) * Update package.json, remove gh-pages dep (#3435) Remove gh-pages dependency (not needed at the time) * [.Net] Add AutoGen.OpenAI package that uses OpenAI v2 SDK (#3402) * udpate * add sample to connect to azure oai * update comment * ping to beta5 * add openai tests * format code * add structural output example * update comment * fix test * resolve comments * fix format issue * update sk version * remove error print stmnt --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Israel de la Cruz <banense@gmail.com> Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com> Co-authored-by: Chi Wang <wang.chi@microsoft.com> Co-authored-by: David Luong <davidluong98@gmail.com> Co-authored-by: HRUSHIKESH DOKALA <96101829+Hk669@users.noreply.github.com> Co-authored-by: Rick <ruiwangwarm@gmail.com> Co-authored-by: Rajan <rajan.chari@yahoo.com> Co-authored-by: Michael <34828001+michaelhaggerty@users.noreply.github.com> Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com> Co-authored-by: Beibin Li <BeibinLi@users.noreply.github.com> Co-authored-by: Krishna Shedbalkar <60742358+krishnashed@users.noreply.github.com> Co-authored-by: Rob <rob@rauxsoftware.com> Co-authored-by: Ian <ArGregoryIan@gmail.com> Co-authored-by: jtoy <jasontoy@gmail.com> Co-authored-by: Jason <jtoy@grids.local> Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com> Co-authored-by: Wael Karkoub <wael.karkoub96@gmail.com> Co-authored-by: Nikolaytv <32233366+NikolayTV@users.noreply.github.com> Co-authored-by: nikolay tolstov <datatraxer@gmail.com> Co-authored-by: pk673 <123758881+pk673@users.noreply.github.com> Co-authored-by: Aretai-Leah <147453745+Aretai-Leah@users.noreply.github.com> Co-authored-by: Li Jiang <bnujli@gmail.com> Co-authored-by: luongdavid 
<luongdavid@microsoft.com> Co-authored-by: Audel Rouhi <knucklessg1@gmail.com> Co-authored-by: Li Jiang <lijiang1@microsoft.com> Co-authored-by: TJ <63432918+tj-cycyota@users.noreply.github.com> Co-authored-by: ikarapanca <ilkerkarapanca@gmail.com> Co-authored-by: Mark Ward <90335263+MarkWard0110@users.noreply.github.com> Co-authored-by: Wei <21039366+Mai0313@users.noreply.github.com> Co-authored-by: Diego Colombo <colombod@me.com> Co-authored-by: Zoltan Lux <lux.zoltan.andras@gmail.com> Co-authored-by: Zoltan Lux <z.lux@campus.tu-berlin.de> Co-authored-by: afourney <adamfo@microsoft.com> Co-authored-by: aswny <87371411+aswny@users.noreply.github.com> Co-authored-by: Joshua Kim <joshkyh@users.noreply.github.com> Co-authored-by: Braelyn Boynton <bboynton97@gmail.com> Co-authored-by: reibs <areibman@gmail.com> Co-authored-by: Howard Gil <howardbgil@gmail.com> Co-authored-by: Alex Reibman <meta.alex.r@gmail.com> Co-authored-by: Daniel (Neng) Wang <37800725+Noir97@users.noreply.github.com> Co-authored-by: Davor Runje <davor@airt.ai> Co-authored-by: ken-gravilon <ken@gravilon.eu> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Yiran Wu <32823396+yiranwu0@users.noreply.github.com> Co-authored-by: Yiran Wu <32823396+kevin666aa@users.noreply.github.com> Co-authored-by: thetechoddbug (José María Gutiérrez) <TheTechOddBug@users.noreply.github.com> Co-authored-by: whichxjy <whichxjy@gmail.com> Co-authored-by: LeoLjl <jjl7199@psu.edu> Co-authored-by: Linxin Song <rm.social.song1@gmail.com> Co-authored-by: Qingyun Wu <qingyun0327@gmail.com> Co-authored-by: AutoGen-Hub <flaml20201204@gmail.com> Co-authored-by: Hk669 <hrushi669@gmail.com> Co-authored-by: Olaoluwa Ademola Salami <olaoluwaasalami@gmail.com> Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com> Co-authored-by: Mark Sze <mark@sze.family> Co-authored-by: Shobhit Vishnoi <69042101+ShobhitVishnoi30@users.noreply.github.com> Co-authored-by: NanthagopalEswaran 
<108973528+NanthagopalEswaran@users.noreply.github.com> Co-authored-by: kevin666aa <yrwu000627@gmail.com> Co-authored-by: Garner Fox McCloud <garnermccloud@gmail.com> Co-authored-by: James Woffinden-Luey <90423712+jluey1@users.noreply.github.com> Co-authored-by: Jeffrey Su <zsu@senparc.com> Co-authored-by: William W Wang <107702013+wmwxwa@users.noreply.github.com> Co-authored-by: Julia Kiseleva <julianakiseleva@gmail.com> Co-authored-by: F. Hinkelmann <franziska.hinkelmann@gmail.com> Co-authored-by: Media <12145726+rihp@users.noreply.github.com> Co-authored-by: Shaun <mr.wrfly@gmail.com> Co-authored-by: Prithvi <itsmeprithvi2000@gmail.com> Co-authored-by: Anush <anushshetty90@gmail.com> Co-authored-by: Luca <tankado@live.it> Co-authored-by: Hugh Lyu <hugh@tiwater.com> Co-authored-by: Nikita Fedyashev <nfedyashev+github@gmail.com> Co-authored-by: Umer Mansoor <umermk3@gmail.com> Co-authored-by: Manojkumar Kotakonda <44414430+makkzone@users.noreply.github.com> Co-authored-by: Sugato Ray <sugatoray@users.noreply.github.com> Co-authored-by: Adil Khalil <adilkhalil@outlook.com> Co-authored-by: Joris van Raaij <82571322+joris-swapfiets@users.noreply.github.com> Co-authored-by: Tristan Jin <52938917+tjin88@users.noreply.github.com> Co-authored-by: Fabian Valle <ranfys.valle@gmail.com> Co-authored-by: Fabian Valle <fabian.valle-simmons@mongodb.com> Co-authored-by: Casey Clements <casey.clements@mongodb.com> Co-authored-by: Jib <jib.adegunloye@mongodb.com> Co-authored-by: Jib <Jibzade@gmail.com> Co-authored-by: Cozypet <yanhan860711@gmail.com> Co-authored-by: wenngong <76683249+wenngong@users.noreply.github.com> Co-authored-by: gongwn1 <gongwn1@lenovo.com> Co-authored-by: Cell <shmilysyg@gmail.com> Co-authored-by: Jatin Shridhar <shridhar.jatin@gmail.com> Co-authored-by: Jay <jaygdesai@gmail.com> Co-authored-by: Aamir <48929123+heyitsaamir@users.noreply.github.com> Co-authored-by: Alexander Lundervold <alexander.lundervold@gmail.com> Co-authored-by: Gaoxiang Luo 
<gluo0401@gmail.com> Co-authored-by: Chaitanya Belwal <cbelwal@gmail.com> Co-authored-by: Henry Kobin <henry.kobin@gmail.com> Co-authored-by: gagb <gagb@users.noreply.github.com> Co-authored-by: morris.liu <8832717+realMorrisLiu@users.noreply.github.com> Co-authored-by: Ricky Loynd <riloynd@microsoft.com> Co-authored-by: Andy Zhou <andyzhou1989@gmail.com> Co-authored-by: Andy Zhou <andyzhou@4bd094a2-01.cloud.together.ai> Co-authored-by: Shaokun Zhang <shaokunzhang529@gmail.com> Co-authored-by: New-World-2019 <37373361+New-World-2019@users.noreply.github.com> Co-authored-by: Eddy Fidel <eddy.fidel0809@gmail.com> Co-authored-by: zcipod <45870019+zcipod@users.noreply.github.com> Co-authored-by: zcipod <zcipod@gmail.com> Co-authored-by: Kirushikesh DB <49152921+Kirushikesh@users.noreply.github.com> Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
parent 4193cea267
commit 4f9383ac21
@@ -9,6 +9,9 @@ autogenstudio/web/workdir/*
autogenstudio/web/ui/*
autogenstudio/web/skills/user/*
.release.sh
.nightly.sh

notebooks/work_dir/*

# Byte-compiled / optimized / DLL files
__pycache__/
@@ -1,7 +1,5 @@
import asyncio
import json
import os
import time
from datetime import datetime
from queue import Queue
from typing import Any, Dict, List, Optional, Tuple, Union

@@ -9,12 +7,7 @@ from typing import Any, Dict, List, Optional, Tuple, Union
import websockets
from fastapi import WebSocket, WebSocketDisconnect

from .datamodel import Message, SocketMessage, Workflow
from .utils import (
    extract_successful_code_blocks,
    get_modified_files,
    summarize_chat_history,
)
from .datamodel import Message
from .workflowmanager import WorkflowManager
@@ -82,76 +75,12 @@ class AutoGenChatManager:
            connection_id=connection_id,
        )

        workflow = Workflow.model_validate(workflow)

        message_text = message.content.strip()
        result_message: Message = workflow_manager.run(message=f"{message_text}", clear_history=False, history=history)

        start_time = time.time()
        workflow_manager.run(message=f"{message_text}", clear_history=False)
        end_time = time.time()

        metadata = {
            "messages": workflow_manager.agent_history,
            "summary_method": workflow.summary_method,
            "time": end_time - start_time,
            "files": get_modified_files(start_time, end_time, source_dir=work_dir),
        }

        output = self._generate_output(message_text, workflow_manager, workflow)

        output_message = Message(
            user_id=message.user_id,
            role="assistant",
            content=output,
            meta=json.dumps(metadata),
            session_id=message.session_id,
        )

        return output_message

    def _generate_output(
        self,
        message_text: str,
        workflow_manager: WorkflowManager,
        workflow: Workflow,
    ) -> str:
        """
        Generates the output response based on the workflow configuration and agent history.

        :param message_text: The text of the incoming message.
        :param workflow_manager: An instance of `WorkflowManager`.
        :param workflow: An instance of `Workflow`.
        :return: The output response as a string.
        """

        output = ""
        if workflow.summary_method == "last":
            successful_code_blocks = extract_successful_code_blocks(workflow_manager.agent_history)
            last_message = (
                workflow_manager.agent_history[-1]["message"]["content"] if workflow_manager.agent_history else ""
            )
            successful_code_blocks = "\n\n".join(successful_code_blocks)
            output = (last_message + "\n" + successful_code_blocks) if successful_code_blocks else last_message
        elif workflow.summary_method == "llm":
            client = workflow_manager.receiver.client
            status_message = SocketMessage(
                type="agent_status",
                data={
                    "status": "summarizing",
                    "message": "Summarizing agent dialogue",
                },
                connection_id=workflow_manager.connection_id,
            )
            self.send(status_message.dict())
            output = summarize_chat_history(
                task=message_text,
                messages=workflow_manager.agent_history,
                client=client,
            )

        elif workflow.summary_method == "none":
            output = ""
        return output
        result_message.user_id = message.user_id
        result_message.session_id = message.session_id
        return result_message


class WebSocketConnectionManager:
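The `_generate_output` helper removed above assembled the final reply; for `summary_method == "last"` it concatenated the last agent message with any successfully executed code blocks. A minimal sketch of that branch, assuming a simplified `agent_history` shape and a stand-in for `extract_successful_code_blocks` (the `code_blocks` key is an illustrative assumption, not the real history format):

```python
# Sketch of the "last" summary strategy from the removed _generate_output:
# take the last agent message and append any successfully executed code blocks.
from typing import Dict, List


def summarize_last(agent_history: List[Dict]) -> str:
    last_message = agent_history[-1]["message"]["content"] if agent_history else ""
    # Stand-in for extract_successful_code_blocks(): collect pre-extracted
    # blocks attached to each history entry (an assumption for this sketch).
    blocks = [b for entry in agent_history for b in entry.get("code_blocks", [])]
    joined = "\n\n".join(blocks)
    return (last_message + "\n" + joined) if joined else last_message


history = [
    {"message": {"content": "Plotting done."}, "code_blocks": ["print('hi')"]},
    {"message": {"content": "All tasks complete."}},
]
print(summarize_last(history))
```

The "llm" branch instead hands the full `agent_history` to `summarize_chat_history` with a model client, which is why it emits an `agent_status` socket message first: summarization adds a model round-trip.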
@@ -16,7 +16,7 @@ def ui(
    port: int = 8081,
    workers: int = 1,
    reload: Annotated[bool, typer.Option("--reload")] = False,
    docs: bool = False,
    docs: bool = True,
    appdir: str = None,
    database_uri: Optional[str] = None,
):
@@ -48,6 +48,39 @@ def ui(
    )


@app.command()
def serve(
    workflow: str = "",
    host: str = "127.0.0.1",
    port: int = 8084,
    workers: int = 1,
    docs: bool = False,
):
    """
    Serve an API endpoint based on an AutoGen Studio workflow json file.

    Args:
        workflow (str): Path to the workflow json file.
        host (str, optional): Host to run the API on. Defaults to 127.0.0.1 (localhost).
        port (int, optional): Port to run the API on. Defaults to 8084.
        workers (int, optional): Number of uvicorn workers. Defaults to 1.
        docs (bool, optional): Whether to generate API docs. Defaults to False.
    """

    os.environ["AUTOGENSTUDIO_API_DOCS"] = str(docs)
    os.environ["AUTOGENSTUDIO_WORKFLOW_FILE"] = workflow

    uvicorn.run(
        "autogenstudio.web.serve:app",
        host=host,
        port=port,
        workers=workers,
        reload=False,
    )


@app.command()
def version():
    """
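The new `serve` command passes its options to the served app through environment variables because uvicorn imports `autogenstudio.web.serve:app` from a string rather than receiving an object. A sketch of that handoff pattern, with illustrative function names (not the actual module contents):

```python
# Sketch of the env-var handoff used by the `serve` command above: the CLI
# process sets variables, and the app module (imported later by uvicorn from
# an import string) reads them at import/startup time.
import os


def cli_serve(workflow: str, docs: bool = False) -> None:
    os.environ["AUTOGENSTUDIO_API_DOCS"] = str(docs)
    os.environ["AUTOGENSTUDIO_WORKFLOW_FILE"] = workflow
    # In the real command, uvicorn.run("autogenstudio.web.serve:app", ...)
    # would be invoked here.


def app_startup() -> dict:
    # What the served app module would do when uvicorn imports it.
    return {
        "workflow_file": os.environ.get("AUTOGENSTUDIO_WORKFLOW_FILE", ""),
        "docs": os.environ.get("AUTOGENSTUDIO_API_DOCS", "False") == "True",
    }


cli_serve("workflow.json", docs=True)
print(app_startup())
```

This also explains why `workers > 1` works: each worker process inherits the environment, so every uvicorn worker sees the same workflow file path.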
@@ -1,3 +1,4 @@
import threading
from datetime import datetime
from typing import Optional

@@ -15,15 +16,23 @@ from ..datamodel import (
    Skill,
    Workflow,
    WorkflowAgentLink,
    WorkflowAgentType,
)
from .utils import init_db_samples

valid_link_types = ["agent_model", "agent_skill", "agent_agent", "workflow_agent"]


class WorkflowAgentMap(SQLModel):
    agent: Agent
    link: WorkflowAgentLink


class DBManager:
    """A class to manage database operations"""

    _init_lock = threading.Lock()  # Class-level lock

    def __init__(self, engine_uri: str):
        connection_args = {"check_same_thread": True} if "sqlite" in engine_uri else {}
        self.engine = create_engine(engine_uri, connect_args=connection_args)

@@ -31,14 +40,15 @@ class DBManager:

    def create_db_and_tables(self):
        """Create a new database and tables"""
        try:
            SQLModel.metadata.create_all(self.engine)
        with self._init_lock:  # Use the lock
            try:
                init_db_samples(self)
                SQLModel.metadata.create_all(self.engine)
                try:
                    init_db_samples(self)
                except Exception as e:
                    logger.info("Error while initializing database samples: " + str(e))
            except Exception as e:
                logger.info("Error while initializing database samples: " + str(e))
        except Exception as e:
            logger.info("Error while creating database tables:" + str(e))
            logger.info("Error while creating database tables:" + str(e))

    def upsert(self, model: SQLModel):
        """Create a new entity"""
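The `_init_lock` introduced above is a class-level lock, so all `DBManager` instances across threads serialize table creation and sample seeding. A minimal sketch of the pattern; the `_initialized` flag is not in the original and is added here only to make the once-only behavior observable:

```python
# Sketch of the class-level init lock added to create_db_and_tables above:
# serialize one-time initialization so concurrent workers don't race on it.
import threading


class Manager:
    _init_lock = threading.Lock()  # shared by all instances, like DBManager._init_lock
    _initialized = False           # illustrative flag, not in the original code

    def create_db_and_tables(self) -> bool:
        with Manager._init_lock:
            if Manager._initialized:
                return False  # another thread already initialized
            # ... SQLModel.metadata.create_all(engine) would run here ...
            Manager._initialized = True
            return True


results = []
threads = [
    threading.Thread(target=lambda: results.append(Manager().create_db_and_tables()))
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.count(True))  # exactly one thread performs the init
```

Because the lock lives on the class rather than the instance, it also protects the common case where each request handler constructs its own `DBManager`.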
@@ -62,7 +72,7 @@ class DBManager:
                session.refresh(model)
            except Exception as e:
                session.rollback()
                logger.error("Error while upserting %s", e)
                logger.error("Error while updating " + str(model_class.__name__) + ": " + str(e))
                status = False

        response = Response(

@@ -115,7 +125,7 @@ class DBManager:
                session.rollback()
                status = False
                status_message = f"Error while fetching {model_class.__name__}"
                logger.error("Error while getting %s: %s", model_class.__name__, e)
                logger.error("Error while getting items: " + str(model_class.__name__) + " " + str(e))

        response: Response = Response(
            message=status_message,

@@ -157,16 +167,16 @@ class DBManager:
                status_message = f"{model_class.__name__} Deleted Successfully"
            else:
                print(f"Row with filters {filters} not found")
                logger.info("Row with filters %s not found", filters)
                logger.info("Row with filters + filters + not found")
                status_message = "Row not found"
        except exc.IntegrityError as e:
            session.rollback()
            logger.error("Integrity ... Error while deleting: %s", e)
            logger.error("Integrity ... Error while deleting: " + str(e))
            status_message = f"The {model_class.__name__} is linked to another entity and cannot be deleted."
            status = False
        except Exception as e:
            session.rollback()
            logger.error("Error while deleting: %s", e)
            logger.error("Error while deleting: " + str(e))
            status_message = f"Error while deleting: {e}"
            status = False
        response = Response(

@@ -182,6 +192,7 @@ class DBManager:
        primary_id: int,
        return_json: bool = False,
        agent_type: Optional[str] = None,
        sequence_id: Optional[int] = None,
    ):
        """
        Get all entities linked to the primary entity.

@@ -217,19 +228,21 @@ class DBManager:
                linked_entities = agent.agents
            elif link_type == "workflow_agent":
                linked_entities = session.exec(
                    select(Agent)
                    .join(WorkflowAgentLink)
                    select(WorkflowAgentLink, Agent)
                    .join(Agent, WorkflowAgentLink.agent_id == Agent.id)
                    .where(
                        WorkflowAgentLink.workflow_id == primary_id,
                        WorkflowAgentLink.agent_type == agent_type,
                    )
                ).all()

                linked_entities = [WorkflowAgentMap(agent=agent, link=link) for link, agent in linked_entities]
                linked_entities = sorted(linked_entities, key=lambda x: x.link.sequence_id)  # type: ignore
        except Exception as e:
            logger.error("Error while getting linked entities: %s", e)
            logger.error("Error while getting linked entities: " + str(e))
            status_message = f"Error while getting linked entities: {e}"
            status = False
        if return_json:
            linked_entities = [self._model_to_dict(row) for row in linked_entities]
            linked_entities = [row.model_dump() for row in linked_entities]

        response = Response(
            message=status_message,
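The `workflow_agent` branch above now selects `(WorkflowAgentLink, Agent)` pairs, wraps each in a `WorkflowAgentMap`, and sorts by the link's `sequence_id`, which is what gives sequential workflows a stable agent order. A sketch of that pairing and ordering, with dataclasses standing in for the SQLModel classes:

```python
# Sketch of the WorkflowAgentMap ordering added above: pair each agent with
# its link row from the join query, then sort by the link's sequence_id.
from dataclasses import dataclass


@dataclass
class Link:
    agent_id: int
    sequence_id: int


@dataclass
class AgentRow:
    id: int
    name: str


@dataclass
class WorkflowAgentMap:
    agent: AgentRow
    link: Link


rows = [  # (link, agent) tuples, as returned by session.exec(...).all()
    (Link(agent_id=2, sequence_id=1), AgentRow(id=2, name="critic")),
    (Link(agent_id=1, sequence_id=0), AgentRow(id=1, name="writer")),
]
linked = [WorkflowAgentMap(agent=a, link=l) for l, a in rows]
linked = sorted(linked, key=lambda x: x.link.sequence_id)
print([m.agent.name for m in linked])  # ['writer', 'critic']
```

Returning the map rather than bare `Agent` rows is the key change: the caller needs the link metadata (`sequence_id`, `agent_type`) as well as the agent itself.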
@@ -245,6 +258,7 @@ class DBManager:
        primary_id: int,
        secondary_id: int,
        agent_type: Optional[str] = None,
        sequence_id: Optional[int] = None,
    ) -> Response:
        """
        Link two entities together.

@@ -357,6 +371,7 @@ class DBManager:
                        WorkflowAgentLink.workflow_id == primary_id,
                        WorkflowAgentLink.agent_id == secondary_id,
                        WorkflowAgentLink.agent_type == agent_type,
                        WorkflowAgentLink.sequence_id == sequence_id,
                    )
                ).first()
                if existing_link:

@@ -373,6 +388,7 @@ class DBManager:
                    workflow_id=primary_id,
                    agent_id=secondary_id,
                    agent_type=agent_type,
                    sequence_id=sequence_id,
                )
                session.add(workflow_agent_link)
                # add and commit the link

@@ -385,7 +401,7 @@ class DBManager:

        except Exception as e:
            session.rollback()
            logger.error("Error while linking: %s", e)
            logger.error("Error while linking: " + str(e))
            status = False
            status_message = f"Error while linking due to an exception: {e}"

@@ -402,6 +418,7 @@ class DBManager:
        primary_id: int,
        secondary_id: int,
        agent_type: Optional[str] = None,
        sequence_id: Optional[int] = 0,
    ) -> Response:
        """
        Unlink two entities.

@@ -417,6 +434,7 @@ class DBManager:
        """
        status = True
        status_message = ""
        print("primary", primary_id, "secondary", secondary_id, "sequence", sequence_id, "agent_type", agent_type)

        if link_type not in valid_link_types:
            status = False

@@ -452,6 +470,7 @@ class DBManager:
                        WorkflowAgentLink.workflow_id == primary_id,
                        WorkflowAgentLink.agent_id == secondary_id,
                        WorkflowAgentLink.agent_type == agent_type,
                        WorkflowAgentLink.sequence_id == sequence_id,
                    )
                ).first()

@@ -465,7 +484,7 @@ class DBManager:

        except Exception as e:
            session.rollback()
            logger.error("Error while unlinking: %s", e)
            logger.error("Error while unlinking: " + str(e))
            status = False
            status_message = f"Error while unlinking due to an exception: {e}"

File diff suppressed because one or more lines are too long
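With `sequence_id` added above, a workflow-agent link is identified by the full `(workflow_id, agent_id, agent_type, sequence_id)` composite, and `link` only inserts when no matching row exists. A sketch of that idempotent check, with an in-memory set standing in for the `WorkflowAgentLink` table:

```python
# Sketch of the idempotent link logic above: a WorkflowAgentLink row is keyed
# by (workflow_id, agent_id, agent_type, sequence_id); only insert if absent.
from typing import Set, Tuple

LinkKey = Tuple[int, int, str, int]
links: Set[LinkKey] = set()  # stands in for the database table


def link_workflow_agent(workflow_id: int, agent_id: int, agent_type: str, sequence_id: int = 0) -> bool:
    key = (workflow_id, agent_id, agent_type, sequence_id)
    if key in links:  # the existing_link query in the real code
        return False
    links.add(key)    # session.add(workflow_agent_link); session.commit()
    return True


print(link_workflow_agent(1, 7, "sender", 0))    # True: created
print(link_workflow_agent(1, 7, "sender", 0))    # False: already linked
print(link_workflow_agent(1, 7, "receiver", 0))  # True: different agent_type
```

Including `sequence_id` in the key is what lets the same agent appear at several positions in a sequential workflow without the second link being rejected as a duplicate.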
@@ -20,6 +20,16 @@ SQLModel.model_config["protected_namespaces"] = ()
# pylint: disable=protected-access


class MessageMeta(SQLModel, table=False):
    task: Optional[str] = None
    messages: Optional[List[Dict[str, Any]]] = None
    summary_method: Optional[str] = "last"
    files: Optional[List[dict]] = None
    time: Optional[datetime] = None
    log: Optional[List[dict]] = None
    usage: Optional[List[dict]] = None


class Message(SQLModel, table=True):
    __table_args__ = {"sqlite_autoincrement": True}
    id: Optional[int] = Field(default=None, primary_key=True)

@@ -38,7 +48,7 @@ class Message(SQLModel, table=True):
        default=None, sa_column=Column(Integer, ForeignKey("session.id", ondelete="CASCADE"))
    )
    connection_id: Optional[str] = None
    meta: Optional[Dict] = Field(default={}, sa_column=Column(JSON))
    meta: Optional[Union[MessageMeta, dict]] = Field(default={}, sa_column=Column(JSON))


class Session(SQLModel, table=True):
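`Message.meta` is widened above from a plain dict to `Union[MessageMeta, dict]` while still being stored in a JSON column, so structured metadata can be built as a typed object and serialized on write. A sketch of that round trip, with a dataclass standing in for the SQLModel `MessageMeta` (the real model uses pydantic-style `model_dump` instead of `asdict`):

```python
# Sketch of the typed Message.meta introduced above: build metadata as an
# object, serialize it into the Column(JSON) field, and restore a typed view.
import json
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class MessageMeta:
    task: Optional[str] = None
    messages: Optional[List[Dict[str, Any]]] = None
    summary_method: str = "last"
    files: List[dict] = field(default_factory=list)


meta = MessageMeta(task="plot stock prices", summary_method="llm")
payload = json.dumps(asdict(meta))             # what lands in the JSON column
restored = MessageMeta(**json.loads(payload))  # typed view on read
print(restored.summary_method)
```

Keeping `dict` in the union preserves backward compatibility: rows written before this change still validate, since they were stored as bare JSON objects.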
@@ -82,11 +92,12 @@ class Skill(SQLModel, table=True):
        sa_column=Column(DateTime(timezone=True), onupdate=func.now()),
    )  # pylint: disable=not-callable
    user_id: Optional[str] = None
    version: Optional[str] = "0.0.1"
    name: str
    content: str
    description: Optional[str] = None
    secrets: Optional[Dict] = Field(default={}, sa_column=Column(JSON))
    libraries: Optional[Dict] = Field(default={}, sa_column=Column(JSON))
    secrets: Optional[List[dict]] = Field(default_factory=list, sa_column=Column(JSON))
    libraries: Optional[List[str]] = Field(default_factory=list, sa_column=Column(JSON))
    agents: List["Agent"] = Relationship(back_populates="skills", link_model=AgentSkillLink)


@@ -97,7 +108,7 @@ class LLMConfig(SQLModel, table=False):
    temperature: float = 0
    cache_seed: Optional[Union[int, None]] = None
    timeout: Optional[int] = None
    max_tokens: Optional[int] = 1000
    max_tokens: Optional[int] = 2048
    extra_body: Optional[dict] = None


@@ -105,6 +116,10 @@ class ModelTypes(str, Enum):
    openai = "open_ai"
    google = "google"
    azure = "azure"
    anthropic = "anthropic"
    mistral = "mistral"
    together = "together"
    groq = "groq"


class Model(SQLModel, table=True):
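The new `ModelTypes` members (`anthropic`, `mistral`, `together`, `groq`) are what let stored model rows route to the new provider clients mentioned in the commit message. Because `ModelTypes` is a str-Enum, an `api_type` string loaded from the database compares equal to the member directly. A sketch of that dispatch; the client names here are illustrative placeholders, not the actual AutoGen Studio classes:

```python
# Sketch of dispatching on the extended ModelTypes enum above: as a str-Enum,
# stored strings compare equal to members without explicit conversion.
from enum import Enum


class ModelTypes(str, Enum):
    openai = "open_ai"
    google = "google"
    azure = "azure"
    anthropic = "anthropic"
    mistral = "mistral"
    together = "together"
    groq = "groq"


def client_name(api_type: str) -> str:
    if api_type == ModelTypes.anthropic:
        return "AnthropicClient"  # placeholder name, not the real class
    if api_type == ModelTypes.mistral:
        return "MistralClient"    # placeholder name, not the real class
    return "OpenAIWrapper"        # default path for OpenAI-compatible types


print(client_name("anthropic"))
print(client_name("open_ai"))
```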
@ -119,6 +134,7 @@ class Model(SQLModel, table=True):
|
|||
sa_column=Column(DateTime(timezone=True), onupdate=func.now()),
|
||||
) # pylint: disable=not-callable
|
||||
user_id: Optional[str] = None
|
||||
version: Optional[str] = "0.0.1"
|
||||
model: str
|
||||
api_key: Optional[str] = None
|
||||
base_url: Optional[str] = None
|
||||
|
@ -164,6 +180,7 @@ class WorkflowAgentType(str, Enum):
|
|||
sender = "sender"
|
||||
receiver = "receiver"
|
||||
planner = "planner"
|
||||
sequential = "sequential"
|
||||
|
||||
|
||||
class WorkflowAgentLink(SQLModel, table=True):
|
||||
|
@ -174,6 +191,7 @@ class WorkflowAgentLink(SQLModel, table=True):
|
|||
default=WorkflowAgentType.sender,
|
||||
sa_column=Column(SqlEnum(WorkflowAgentType), primary_key=True),
|
||||
)
|
||||
sequence_id: Optional[int] = Field(default=0, primary_key=True)
|
||||
|
||||
|
||||
class AgentLink(SQLModel, table=True):
|
||||
|
@@ -194,8 +212,9 @@ class Agent(SQLModel, table=True):
        sa_column=Column(DateTime(timezone=True), onupdate=func.now()),
    ) # pylint: disable=not-callable
    user_id: Optional[str] = None
    version: Optional[str] = "0.0.1"
    type: AgentType = Field(default=AgentType.assistant, sa_column=Column(SqlEnum(AgentType)))
-    config: AgentConfig = Field(default_factory=AgentConfig, sa_column=Column(JSON))
+    config: Union[AgentConfig, dict] = Field(default_factory=AgentConfig, sa_column=Column(JSON))
    skills: List[Skill] = Relationship(back_populates="agents", link_model=AgentSkillLink)
    models: List[Model] = Relationship(back_populates="agents", link_model=AgentModelLink)
    workflows: List["Workflow"] = Relationship(link_model=WorkflowAgentLink, back_populates="agents")
@@ -215,11 +234,12 @@ class Agent(SQLModel, table=True):
            secondaryjoin="Agent.id==AgentLink.agent_id",
        ),
    )
    task_instruction: Optional[str] = None


class WorkFlowType(str, Enum):
    twoagents = "twoagents"
    groupchat = "groupchat"
    autonomous = "autonomous"
    sequential = "sequential"


class WorkFlowSummaryMethod(str, Enum):
@@ -240,14 +260,16 @@ class Workflow(SQLModel, table=True):
        sa_column=Column(DateTime(timezone=True), onupdate=func.now()),
    ) # pylint: disable=not-callable
    user_id: Optional[str] = None
    version: Optional[str] = "0.0.1"
    name: str
    description: str
    agents: List[Agent] = Relationship(back_populates="workflows", link_model=WorkflowAgentLink)
-    type: WorkFlowType = Field(default=WorkFlowType.twoagents, sa_column=Column(SqlEnum(WorkFlowType)))
+    type: WorkFlowType = Field(default=WorkFlowType.autonomous, sa_column=Column(SqlEnum(WorkFlowType)))
    summary_method: Optional[WorkFlowSummaryMethod] = Field(
        default=WorkFlowSummaryMethod.last,
        sa_column=Column(SqlEnum(WorkFlowSummaryMethod)),
    )
    sample_tasks: Optional[List[str]] = Field(default_factory=list, sa_column=Column(JSON))


class Response(SQLModel):
@@ -0,0 +1,108 @@
# metrics - agent_frequency, execution_count, tool_count,

from typing import Dict, List, Optional

from .datamodel import Message, MessageMeta


class Profiler:
    """
    Profiler class to profile agent task runs and compute metrics
    for performance evaluation.
    """

    def __init__(self):
        self.metrics: List[Dict] = []

    def _is_code(self, message: Message) -> bool:
        """
        Check if the message contains code.

        :param message: The message instance to check.
        :return: True if the message contains code, False otherwise.
        """
        content = message.get("message").get("content").lower()
        return "```" in content

    def _is_tool(self, message: Message) -> bool:
        """
        Check if the message uses a tool.

        :param message: The message instance to check.
        :return: True if the message uses a tool, False otherwise.
        """
        content = message.get("message").get("content").lower()
        return "from skills import" in content

    def _is_code_execution(self, message: Message) -> Dict:
        """
        Check if the message indicates code execution.

        :param message: The message instance to check.
        :return: dict with is_code and status keys.
        """
        content = message.get("message").get("content").lower()
        if "exitcode:" in content:
            status = "exitcode: 0" in content
            return {"is_code": True, "status": status}
        else:
            return {"is_code": False, "status": False}

    def _is_terminate(self, message: Message) -> bool:
        """
        Check if the message indicates termination.

        :param message: The message instance to check.
        :return: True if the message indicates termination, False otherwise.
        """
        content = message.get("message").get("content").lower()
        return "terminate" in content

    def profile(self, agent_message: Message):
        """
        Profile the agent task run and compute metrics.

        :param agent_message: The message holding the agent task run to profile.
        """
        meta = MessageMeta(**agent_message.meta)
        usage = meta.usage
        messages = meta.messages
        profile = []
        bar = []
        stats = {}
        total_code_executed = 0
        success_code_executed = 0
        agents = []
        for message in messages:
            agent = message.get("sender")
            is_tool = self._is_tool(message)
            is_code_execution = self._is_code_execution(message)
            total_code_executed += is_code_execution["is_code"]
            success_code_executed += 1 if is_code_execution["status"] else 0

            row = {
                "agent": agent,
                "tool_call": is_tool,
                "code_execution": is_code_execution,
                "terminate": self._is_terminate(message),
            }
            bar_row = {
                "agent": agent,
                "tool_call": "tool call" if is_tool else "no tool call",
                "code_execution": (
                    "success"
                    if is_code_execution["status"]
                    else "failure" if is_code_execution["is_code"] else "no code"
                ),
                "message": 1,
            }
            profile.append(row)
            bar.append(bar_row)
            agents.append(agent)
        code_success_rate = (success_code_executed / total_code_executed if total_code_executed > 0 else 0) * 100
        stats["code_success_rate"] = code_success_rate
        stats["total_code_executed"] = total_code_executed
        return {"profile": profile, "bar": bar, "stats": stats, "agents": set(agents), "usage": usage}
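The profiler's heuristics operate on plain message text, so they can be sketched standalone. A minimal mirror of the `exitcode:` heuristic, with the `Message`/`MessageMeta` wrappers omitted (the function below is a hypothetical standalone copy, not the package API):

```python
def is_code_execution(content: str) -> dict:
    # mirrors Profiler._is_code_execution: an "exitcode:" marker means code
    # was executed, and "exitcode: 0" marks a successful run
    content = content.lower()
    if "exitcode:" in content:
        return {"is_code": True, "status": "exitcode: 0" in content}
    return {"is_code": False, "status": False}


print(is_code_execution("exitcode: 0 (execution succeeded)"))
print(is_code_execution("exitcode: 1 (execution failed)"))
print(is_code_execution("just a chat message"))
```

The success rate in `profile` is then just successful runs over total runs, scaled to a percentage.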

@@ -289,7 +289,7 @@ def init_app_folders(app_file_path: str) -> Dict[str, str]:
    return folders


-def get_skills_from_prompt(skills: List[Skill], work_dir: str) -> str:
+def get_skills_prompt(skills: List[Skill], work_dir: str) -> str:
    """
    Create a prompt with the content of all skills and write the skills to a file named skills.py in the work_dir.
@@ -306,9 +306,48 @@ install via pip and use --quiet option.

    """
    prompt = ""  # filename: skills.py

    for skill in skills:
        if not isinstance(skill, Skill):
            skill = Skill(**skill)
        if skill.secrets:
            for secret in skill.secrets:
                if secret.get("value") is not None:
                    os.environ[secret["secret"]] = secret["value"]
        prompt += f"""

##### Begin of {skill.name} #####
from skills import {skill.name} # Import the function from skills.py

{skill.content}

#### End of {skill.name} ####

        """

    return instruction + prompt


def save_skills_to_file(skills: List[Skill], work_dir: str) -> None:
    """
    Write the skills to a file named skills.py in the work_dir.

    :param skills: A list of skills
    """

    # TBD: Double check for duplicate skills?

    # check if work_dir exists
    if not os.path.exists(work_dir):
        os.makedirs(work_dir)

    skills_content = ""
    for skill in skills:
        if not isinstance(skill, Skill):
            skill = Skill(**skill)

        skills_content += f"""

##### Begin of {skill.name} #####

{skill.content}
@@ -317,15 +356,9 @@ install via pip and use --quiet option.

    """

    # check if work_dir exists
    if not os.path.exists(work_dir):
        os.makedirs(work_dir)

    # overwrite skills.py in work_dir
    with open(os.path.join(work_dir, "skills.py"), "w", encoding="utf-8") as f:
-        f.write(prompt)
-
-    return instruction + prompt
+        f.write(skills_content)


def delete_files_in_folder(folders: Union[str, List[str]]) -> None:
@@ -405,9 +438,23 @@ def test_model(model: Model):
    Test the model endpoint by sending a simple message to the model and returning the response.
    """

    print("Testing model", model)

    sanitized_model = sanitize_model(model)
    client = OpenAIWrapper(config_list=[sanitized_model])
-    response = client.create(messages=[{"role": "user", "content": "2+2="}], cache_seed=None)
+    response = client.create(
+        messages=[
+            {
+                "role": "system",
+                "content": "You are a helpful assistant that can add numbers. ONLY RETURN THE RESULT.",
+            },
+            {
+                "role": "user",
+                "content": "2+2=",
+            },
+        ],
+        cache_seed=None,
+    )
    return response.choices[0].message.content
@@ -426,7 +473,11 @@ def load_code_execution_config(code_execution_type: CodeExecutionConfigTypes, wo
    if code_execution_type == CodeExecutionConfigTypes.local:
        executor = LocalCommandLineCodeExecutor(work_dir=work_dir)
    elif code_execution_type == CodeExecutionConfigTypes.docker:
-        executor = DockerCommandLineCodeExecutor(work_dir=work_dir)
+        try:
+            executor = DockerCommandLineCodeExecutor(work_dir=work_dir)
+        except Exception as e:
+            logger.error(f"Error initializing Docker executor: {e}")
+            return False
    elif code_execution_type == CodeExecutionConfigTypes.none:
        return False
    else:
@@ -462,3 +513,61 @@ def summarize_chat_history(task: str, messages: List[Dict[str, str]], client: Mo
    ]
    response = client.create(messages=summarization_prompt, cache_seed=None)
    return response.choices[0].message.content


def get_autogen_log(db_path="logs.db"):
    """
    Fetches data from the autogen logs database.

    Args:
        db_path (str): Path to the database file. Defaults to "logs.db".

    Returns:
        list: A list of dictionaries, where each dictionary represents a row from the chat_completions table joined with the agent name.
    """
    import json
    import sqlite3

    con = sqlite3.connect(db_path)
    query = """
        SELECT
            chat_completions.*,
            agents.name AS agent_name
        FROM
            chat_completions
        JOIN
            agents ON chat_completions.wrapper_id = agents.wrapper_id
    """
    cursor = con.execute(query)
    rows = cursor.fetchall()
    column_names = [description[0] for description in cursor.description]
    data = [dict(zip(column_names, row)) for row in rows]
    for row in data:
        response = json.loads(row["response"])
        total_tokens = response.get("usage", {}).get("total_tokens", 0)
        row["total_tokens"] = total_tokens
    con.close()
    return data
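The row-to-dict conversion above leans on `cursor.description`, which carries the column names of the last query. A self-contained sqlite3 sketch of the same pattern, using an in-memory stand-in for `logs.db` (the schema fragment here is hypothetical, reduced to the two columns the pattern needs):

```python
import json
import sqlite3

# tiny stand-in for the logs database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chat_completions (id INTEGER, response TEXT)")
con.execute(
    "INSERT INTO chat_completions VALUES (1, ?)",
    ('{"usage": {"total_tokens": 7}}',),
)

cursor = con.execute("SELECT * FROM chat_completions")
rows = cursor.fetchall()
# cursor.description holds (name, ...) tuples per column, so rows zip into dicts
column_names = [description[0] for description in cursor.description]
data = [dict(zip(column_names, row)) for row in rows]
# enrich each row with the token count parsed out of the JSON response blob
for row in data:
    row["total_tokens"] = json.loads(row["response"]).get("usage", {}).get("total_tokens", 0)
con.close()
print(data[0]["total_tokens"])
```

Parameter binding (`?`) keeps the JSON blob out of the SQL string; the real function reads pre-written rows, so it only needs the SELECT side.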


def find_key_value(d, target_key):
    """
    Recursively search for a key in a nested dictionary and return its value.
    """
    if d is None:
        return None

    if isinstance(d, dict):
        if target_key in d:
            return d[target_key]
        for k in d:
            item = find_key_value(d[k], target_key)
            if item is not None:
                return item
    elif isinstance(d, list):
        for i in d:
            item = find_key_value(i, target_key)
            if item is not None:
                return item
    return None
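`find_key_value` walks dicts and lists depth-first and returns the first match, which is what makes the usage extraction robust to client-specific nesting of usage summaries. A quick standalone check (the function is copied verbatim so the sketch runs on its own):

```python
def find_key_value(d, target_key):
    # depth-first search through nested dicts/lists for the first target_key
    if d is None:
        return None
    if isinstance(d, dict):
        if target_key in d:
            return d[target_key]
        for k in d:
            item = find_key_value(d[k], target_key)
            if item is not None:
                return item
    elif isinstance(d, list):
        for i in d:
            item = find_key_value(i, target_key)
            if item is not None:
                return item
    return None


# hypothetical usage-summary shape: the token count sits two levels deep
usage = {"client": [{"usage": {"total_tokens": 42}}]}
print(find_key_value(usage, "total_tokens"))
```

Note the `is not None` checks: a found value of `0` or `False` still propagates up, but a literal `None` value is indistinguishable from "not found".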


@@ -1,3 +1,3 @@
-VERSION = "0.0.56rc9"
+VERSION = "0.1.4"
__version__ = VERSION
APP_NAME = "autogenstudio"
@@ -4,7 +4,7 @@ import queue
import threading
import traceback
from contextlib import asynccontextmanager
-from typing import Any
+from typing import Any, Union

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware
@@ -16,9 +16,11 @@ from ..chatmanager import AutoGenChatManager, WebSocketConnectionManager
from ..database import workflow_from_id
from ..database.dbmanager import DBManager
from ..datamodel import Agent, Message, Model, Response, Session, Skill, Workflow
+from ..profiler import Profiler
from ..utils import check_and_cast_datetime_fields, init_app_folders, md5_hash, test_model
from ..version import VERSION

+profiler = Profiler()
managers = {"chat": None}  # manage calls to autogen
# Create thread-safe queue for messages between api thread and autogen threads
message_queue = queue.Queue()
@@ -92,8 +94,15 @@ app.add_middleware(
    allow_headers=["*"],
)


-api = FastAPI(root_path="/api")
+show_docs = os.environ.get("AUTOGENSTUDIO_API_DOCS", "False").lower() == "true"
+docs_url = "/docs" if show_docs else None
+api = FastAPI(
+    root_path="/api",
+    title="AutoGen Studio API",
+    version=VERSION,
+    docs_url=docs_url,
+    description="AutoGen Studio is a low-code tool for building and testing multi-agent workflows using AutoGen.",
+)
# mount an api route such that the main route serves the ui and the /api
app.mount("/api", api)
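The docs toggle above is a plain environment-flag pattern: any casing of `"true"` enables the interactive docs, and `docs_url=None` disables them in FastAPI. The flag logic in isolation:

```python
import os

# docs are opt-in: "true" in any casing enables them, everything else disables
os.environ["AUTOGENSTUDIO_API_DOCS"] = "True"
show_docs = os.environ.get("AUTOGENSTUDIO_API_DOCS", "False").lower() == "true"
docs_url = "/docs" if show_docs else None
print(show_docs, docs_url)
```

With the variable unset, `os.environ.get` falls back to `"False"`, so docs stay off by default.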


@@ -293,6 +302,19 @@ async def get_workflow(workflow_id: int, user_id: str):
    return list_entity(Workflow, filters=filters)


@api.get("/workflows/export/{workflow_id}")
async def export_workflow(workflow_id: int, user_id: str):
    """Export a user workflow"""
    response = Response(message="Workflow exported successfully", status=True, data=None)
    try:
        workflow_details = workflow_from_id(workflow_id, dbmanager=dbmanager)
        response.data = workflow_details
    except Exception as ex_error:
        response.message = "Error occurred while exporting workflow: " + str(ex_error)
        response.status = False
    return response.model_dump(mode="json")


@api.post("/workflows")
async def create_workflow(workflow: Workflow):
    """Create a new workflow"""
@@ -317,6 +339,19 @@ async def link_workflow_agent(workflow_id: int, agent_id: int, agent_type: str):
    )


@api.post("/workflows/link/agent/{workflow_id}/{agent_id}/{agent_type}/{sequence_id}")
async def link_workflow_agent_sequence(workflow_id: int, agent_id: int, agent_type: str, sequence_id: int):
    """Link an agent to a workflow sequence"""
    return dbmanager.link(
        link_type="workflow_agent",
        primary_id=workflow_id,
        secondary_id=agent_id,
        agent_type=agent_type,
        sequence_id=sequence_id,
    )


@api.delete("/workflows/link/agent/{workflow_id}/{agent_id}/{agent_type}")
async def unlink_workflow_agent(workflow_id: int, agent_id: int, agent_type: str):
    """Unlink an agent from a workflow"""
@@ -328,17 +363,47 @@ async def unlink_workflow_agent(workflow_id: int, agent_id: int, agent_type: str
    )


-@api.get("/workflows/link/agent/{workflow_id}/{agent_type}")
-async def get_linked_workflow_agents(workflow_id: int, agent_type: str):
+@api.delete("/workflows/link/agent/{workflow_id}/{agent_id}/{agent_type}/{sequence_id}")
+async def unlink_workflow_agent_sequence(workflow_id: int, agent_id: int, agent_type: str, sequence_id: int):
+    """Unlink an agent from a workflow sequence"""
+    return dbmanager.unlink(
+        link_type="workflow_agent",
+        primary_id=workflow_id,
+        secondary_id=agent_id,
+        agent_type=agent_type,
+        sequence_id=sequence_id,
+    )
+
+
+@api.get("/workflows/link/agent/{workflow_id}")
+async def get_linked_workflow_agents(workflow_id: int):
    """Get all agents linked to a workflow"""
    return dbmanager.get_linked_entities(
        link_type="workflow_agent",
        primary_id=workflow_id,
-        agent_type=agent_type,
        return_json=True,
    )


@api.get("/profiler/{message_id}")
async def profile_agent_task_run(message_id: int):
    """Profile an agent task run"""
    try:
        agent_message = dbmanager.get(Message, filters={"id": message_id}).data[0]

        profile = profiler.profile(agent_message)
        return {
            "status": True,
            "message": "Agent task run profiled successfully",
            "data": profile,
        }
    except Exception as ex_error:
        return {
            "status": False,
            "message": "Error occurred while profiling agent task run: " + str(ex_error),
        }


@api.get("/sessions")
async def list_sessions(user_id: str):
    """List all sessions for a user"""
@@ -395,7 +460,6 @@ async def run_session_workflow(message: Message, session_id: int, workflow_id: i
        response: Response = dbmanager.upsert(agent_response)
        return response.model_dump(mode="json")
    except Exception as ex_error:
        print(traceback.format_exc())
        return {
            "status": False,
            "message": "Error occurred while processing message: " + str(ex_error),
@@ -0,0 +1,30 @@
# Loads a FastAPI app with a single endpoint that takes a text query and returns a response

import os

from fastapi import FastAPI

from ..datamodel import Response
from ..workflowmanager import WorkflowManager

app = FastAPI()
workflow_file_path = os.environ.get("AUTOGENSTUDIO_WORKFLOW_FILE", None)


if workflow_file_path:
    workflow_manager = WorkflowManager(workflow=workflow_file_path)
else:
    raise ValueError("Workflow file must be specified")


@app.get("/predict/{task}")
async def predict(task: str):
    response = Response(message="Task successfully completed", status=True, data=None)
    try:
        result_message = workflow_manager.run(message=task, clear_history=False)
        response.data = result_message
    except Exception as e:
        response.message = str(e)
        response.status = False
    return response
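The handler's error handling is a wrap-everything pattern: on success the result lands in `data`, on any exception `status` flips to `False` and the exception text becomes the message. A standalone sketch of that pattern without FastAPI (the `Response` dataclass and `run` callables here are hypothetical stand-ins for the datamodel `Response` and `WorkflowManager.run`):

```python
from dataclasses import dataclass


@dataclass
class Response:
    message: str
    status: bool
    data: object = None


def predict(task, run):
    # mirror of the /predict handler: success fills data, any exception
    # flips status to False and surfaces the error text as the message
    response = Response(message="Task successfully completed", status=True)
    try:
        response.data = run(task)
    except Exception as e:
        response.message = str(e)
        response.status = False
    return response


def failing_run(task):
    raise ValueError("no workflow")


ok = predict("2+2", lambda t: "4")
bad = predict("2+2", failing_run)
print(ok.status, bad.status, bad.message)
```

The endpoint itself would be started with something like `uvicorn autogenstudio.web.serve:app` after setting `AUTOGENSTUDIO_WORKFLOW_FILE`, since module import fails fast when the variable is missing.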

@@ -1,4 +1,6 @@
import json
import os
import time
from datetime import datetime
from typing import Any, Dict, List, Optional, Union
@@ -7,20 +9,33 @@ import autogen

from .datamodel import (
    Agent,
    AgentType,
    CodeExecutionConfigTypes,
    Message,
    SocketMessage,
    Workflow,
    WorkFlowSummaryMethod,
    WorkFlowType,
)
+from .utils import (
+    clear_folder,
+    find_key_value,
+    get_modified_files,
+    get_skills_prompt,
+    load_code_execution_config,
+    sanitize_model,
+    save_skills_to_file,
+    summarize_chat_history,
+)
-from .utils import clear_folder, get_skills_from_prompt, load_code_execution_config, sanitize_model


-class WorkflowManager:
+class AutoWorkflowManager:
    """
-    AutoGenWorkFlowManager class to load agents from a provided configuration and run a chat between them
+    WorkflowManager class to load agents from a provided configuration and run a chat between them.
    """

    def __init__(
        self,
-        workflow: Dict,
+        workflow: Union[Dict, str],
        history: Optional[List[Message]] = None,
        work_dir: str = None,
        clear_work_dir: bool = True,
@@ -28,27 +43,74 @@ class WorkflowManager:
        connection_id: Optional[str] = None,
    ) -> None:
        """
-        Initializes the AutoGenFlow with agents specified in the config and optional
-        message history.
+        Initializes the WorkflowManager with agents specified in the config and optional message history.

        Args:
-            config: The configuration settings for the sender and receiver agents.
-            history: An optional list of previous messages to populate the agents' history.
+            workflow (Union[Dict, str]): The workflow configuration. This can be a dictionary or a string which is a path to a JSON file.
+            history (Optional[List[Message]]): The message history.
+            work_dir (str): The working directory.
+            clear_work_dir (bool): If set to True, clears the working directory.
+            send_message_function (Optional[callable]): The function to send messages.
+            connection_id (Optional[str]): The connection identifier.
        """
        if isinstance(workflow, str):
            if os.path.isfile(workflow):
                with open(workflow, "r") as file:
                    self.workflow = json.load(file)
            else:
                raise FileNotFoundError(f"The file {workflow} does not exist.")
        elif isinstance(workflow, dict):
            self.workflow = workflow
        else:
            raise ValueError("The 'workflow' parameter should be either a dictionary or a valid JSON file path")

        # TODO - improved typing for workflow
        self.workflow_skills = []
        self.send_message_function = send_message_function
        self.connection_id = connection_id
        self.work_dir = work_dir or "work_dir"
        self.code_executor_pool = {
            CodeExecutionConfigTypes.local: load_code_execution_config(
                CodeExecutionConfigTypes.local, work_dir=self.work_dir
            ),
            CodeExecutionConfigTypes.docker: load_code_execution_config(
                CodeExecutionConfigTypes.docker, work_dir=self.work_dir
            ),
        }
        if clear_work_dir:
            clear_folder(self.work_dir)
-        self.workflow = workflow
-        self.sender = self.load(workflow.get("sender"))
-        self.receiver = self.load(workflow.get("receiver"))
        self.agent_history = []
        self.history = history or []
        self.sender = None
        self.receiver = None

-        if history:
-            self._populate_history(history)
    def _run_workflow(self, message: str, history: Optional[List[Message]] = None, clear_history: bool = False) -> None:
        """
        Runs the workflow based on the provided configuration.

        Args:
            message: The initial message to start the chat.
            history: A list of messages to populate the agents' history.
            clear_history: If set to True, clears the chat history before initiating.
        """
        for agent in self.workflow.get("agents", []):
            if agent.get("link").get("agent_type") == "sender":
                self.sender = self.load(agent.get("agent"))
            elif agent.get("link").get("agent_type") == "receiver":
                self.receiver = self.load(agent.get("agent"))
        if self.sender and self.receiver:
            # save all agent skills to skills.py
            save_skills_to_file(self.workflow_skills, self.work_dir)
            if history:
                self._populate_history(history)
            self.sender.initiate_chat(
                self.receiver,
                message=message,
                clear_history=clear_history,
            )
        else:
            raise ValueError("Sender and receiver agents are not defined in the workflow configuration.")

    def _serialize_agent(
        self,
@@ -184,13 +246,13 @@ class WorkflowManager:
            config_list.append(sanitized_llm)
        agent.config.llm_config.config_list = config_list

-        agent.config.code_execution_config = load_code_execution_config(
-            agent.config.code_execution_config, work_dir=self.work_dir
-        )
+        agent.config.code_execution_config = self.code_executor_pool.get(agent.config.code_execution_config, False)

        if skills:
            for skill in skills:
                self.workflow_skills.append(skill)
            skills_prompt = ""
-            skills_prompt = get_skills_from_prompt(skills, self.work_dir)
+            skills_prompt = get_skills_prompt(skills, self.work_dir)
            if agent.config.system_message:
                agent.config.system_message = agent.config.system_message + "\n\n" + skills_prompt
            else:
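The executor-pool change above swaps a per-agent `load_code_execution_config` call for a lookup into executors built once in `__init__`, so an expensive initialization (e.g. Docker) happens at most once per workflow. The lookup convention in isolation (placeholder executor objects stand in for the real command-line executors):

```python
# executors are built once and shared; .get(key, False) maps an unknown or
# "none" config to False, the value AutoGen treats as "code execution disabled"
code_executor_pool = {
    "local": {"executor": "local-commandline"},   # placeholder executors
    "docker": {"executor": "docker-commandline"},
}


def resolve(code_execution_config):
    return code_executor_pool.get(code_execution_config, False)


print(resolve("local"))
print(resolve("none"))
```

The `False` fallback doubles as the error path: the Docker branch of `load_code_execution_config` also returns `False` when the executor cannot be initialized.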
@@ -241,7 +303,81 @@ class WorkflowManager:
            raise ValueError(f"Unknown agent type: {agent.type}")
        return agent

-    def run(self, message: str, clear_history: bool = False) -> None:
    def _generate_output(
        self,
        message_text: str,
        summary_method: str,
    ) -> str:
        """
        Generates the output response based on the workflow configuration and agent history.

        :param message_text: The text of the incoming message.
        :param summary_method: The summary method to use.
        :return: The output response as a string.
        """

        output = ""
        if summary_method == WorkFlowSummaryMethod.last:
            last_message = self.agent_history[-1]["message"]["content"] if self.agent_history else ""
            output = last_message
        elif summary_method == WorkFlowSummaryMethod.llm:
            client = self.receiver.client
            if self.connection_id:
                status_message = SocketMessage(
                    type="agent_status",
                    data={
                        "status": "summarizing",
                        "message": "Summarizing agent dialogue",
                    },
                    connection_id=self.connection_id,
                )
                self.send_message_function(status_message.model_dump(mode="json"))
            output = summarize_chat_history(
                task=message_text,
                messages=self.agent_history,
                client=client,
            )

        elif summary_method == "none":
            output = ""
        return output

    def _get_agent_usage(self, agent: autogen.Agent):
        final_usage = []
        default_usage = {"total_cost": 0, "total_tokens": 0}
        agent_usage = agent.client.total_usage_summary if agent.client else default_usage
        agent_usage = {
            "agent": agent.name,
            "total_cost": find_key_value(agent_usage, "total_cost") or 0,
            "total_tokens": find_key_value(agent_usage, "total_tokens") or 0,
        }
        final_usage.append(agent_usage)

        if isinstance(agent, ExtendedGroupChatManager):
            for group_agent in agent.groupchat.agents:
                agent_usage = group_agent.client.total_usage_summary if group_agent.client else default_usage
                agent_usage = {
                    "agent": group_agent.name,
                    "total_cost": find_key_value(agent_usage, "total_cost") or 0,
                    "total_tokens": find_key_value(agent_usage, "total_tokens") or 0,
                }
                final_usage.append(agent_usage)
        return final_usage

    def _get_usage_summary(self):
        sender_usage = self._get_agent_usage(self.sender)
        receiver_usage = self._get_agent_usage(self.receiver)

        all_usage = []
        all_usage.extend(sender_usage)
        all_usage.extend(receiver_usage)
        return all_usage

+    def run(self, message: str, history: Optional[List[Message]] = None, clear_history: bool = False) -> Message:
        """
        Initiates a chat between the sender and receiver agents with an initial message
        and an option to clear the history.
@@ -250,11 +386,262 @@ class WorkflowManager:
            message: The initial message to start the chat.
            clear_history: If set to True, clears the chat history before initiating.
        """
-        self.sender.initiate_chat(
-            self.receiver,
-            message=message,
-            clear_history=clear_history,
-        )
        start_time = time.time()
        self._run_workflow(message=message, history=history, clear_history=clear_history)
        end_time = time.time()

        output = self._generate_output(message, self.workflow.get("summary_method", "last"))

        usage = self._get_usage_summary()

        result_message = Message(
            content=output,
            role="assistant",
            meta={
                "messages": self.agent_history,
                "summary_method": self.workflow.get("summary_method", "last"),
                "time": end_time - start_time,
                "files": get_modified_files(start_time, end_time, source_dir=self.work_dir),
                "usage": usage,
            },
        )
        return result_message


class SequentialWorkflowManager:
    """
    WorkflowManager class to load agents from a provided configuration and run a chat between them sequentially.
    """

    def __init__(
        self,
        workflow: Union[Dict, str],
        history: Optional[List[Message]] = None,
        work_dir: str = None,
        clear_work_dir: bool = True,
        send_message_function: Optional[callable] = None,
        connection_id: Optional[str] = None,
    ) -> None:
        """
        Initializes the WorkflowManager with agents specified in the config and optional message history.

        Args:
            workflow (Union[Dict, str]): The workflow configuration. This can be a dictionary or a string which is a path to a JSON file.
            history (Optional[List[Message]]): The message history.
            work_dir (str): The working directory.
            clear_work_dir (bool): If set to True, clears the working directory.
            send_message_function (Optional[callable]): The function to send messages.
            connection_id (Optional[str]): The connection identifier.
        """
        if isinstance(workflow, str):
            if os.path.isfile(workflow):
                with open(workflow, "r") as file:
                    self.workflow = json.load(file)
            else:
                raise FileNotFoundError(f"The file {workflow} does not exist.")
        elif isinstance(workflow, dict):
            self.workflow = workflow
        else:
            raise ValueError("The 'workflow' parameter should be either a dictionary or a valid JSON file path")

        # TODO - improved typing for workflow
        self.send_message_function = send_message_function
        self.connection_id = connection_id
        self.work_dir = work_dir or "work_dir"
        if clear_work_dir:
            clear_folder(self.work_dir)
        self.agent_history = []
        self.history = history or []
        self.sender = None
        self.receiver = None
        self.model_client = None

    def _run_workflow(self, message: str, history: Optional[List[Message]] = None, clear_history: bool = False) -> None:
        """
        Runs the workflow based on the provided configuration.

        Args:
            message: The initial message to start the chat.
            history: A list of messages to populate the agents' history.
            clear_history: If set to True, clears the chat history before initiating.
        """
        user_proxy = {
            "config": {
                "name": "user_proxy",
                "human_input_mode": "NEVER",
                "max_consecutive_auto_reply": 25,
                "code_execution_config": "local",
                "default_auto_reply": "TERMINATE",
                "description": "User Proxy Agent Configuration",
                "llm_config": False,
                "type": "userproxy",
            }
        }
        sequential_history = []
        for i, agent in enumerate(self.workflow.get("agents", [])):
            workflow = Workflow(
                name="agent workflow", type=WorkFlowType.autonomous, summary_method=WorkFlowSummaryMethod.llm
            )
            workflow = workflow.model_dump(mode="json")
            agent = agent.get("agent")
            workflow["agents"] = [
                {"agent": user_proxy, "link": {"agent_type": "sender"}},
                {"agent": agent, "link": {"agent_type": "receiver"}},
            ]

            auto_workflow = AutoWorkflowManager(
                workflow=workflow,
                history=history,
                work_dir=self.work_dir,
                clear_work_dir=True,
                send_message_function=self.send_message_function,
                connection_id=self.connection_id,
            )
            task_prompt = (
                f"""
                Your primary instructions are as follows:
                {agent.get("task_instruction")}
                Context for addressing your task is below:
                =======
                {str(sequential_history)}
                =======
                Now address your task:
                """
                if i > 0
                else message
            )
            result = auto_workflow.run(message=task_prompt, clear_history=clear_history)
            sequential_history.append(result.content)
            self.model_client = auto_workflow.receiver.client
            self.agent_history.extend(result.meta.get("messages", []))
|
||||
|
||||
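The loop above chains agents sequentially: the first agent gets the raw task, and each later agent's prompt embeds its own instruction plus the accumulated outputs of the agents before it. A minimal, self-contained sketch of that chaining idea (the `run_sequence` and `run_agent` names here are illustrative stand-ins, not autogenstudio APIs):

```python
from typing import Callable, List


def run_sequence(task: str, instructions: List[str], run_agent: Callable[[str], str]) -> List[str]:
    """Run agents in order; each agent after the first sees prior results as context."""
    history: List[str] = []
    for i, instruction in enumerate(instructions):
        if i == 0:
            # The first agent receives the original task verbatim.
            prompt = task
        else:
            # Later agents receive their instruction plus the accumulated history.
            prompt = (
                f"Your primary instructions are as follows:\n{instruction}\n"
                f"Context for addressing your task is below:\n=======\n"
                f"{history}\n=======\nNow address your task:"
            )
        history.append(run_agent(prompt))
    return history
```

This mirrors the structure of `_run_workflow`, where `AutoWorkflowManager.run` plays the role of `run_agent` and `sequential_history` carries each step's `result.content` forward.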
    def _generate_output(
        self,
        message_text: str,
        summary_method: str,
    ) -> str:
        """
        Generates the output response based on the workflow configuration and agent history.

        :param message_text: The text of the incoming message.
        :param summary_method: The method used to summarize the agent dialogue ("last", "llm", or "none").
        :return: The output response as a string.
        """
        output = ""
        if summary_method == WorkFlowSummaryMethod.last:
            last_message = self.agent_history[-1]["message"]["content"] if self.agent_history else ""
            output = last_message
        elif summary_method == WorkFlowSummaryMethod.llm:
            if self.connection_id:
                status_message = SocketMessage(
                    type="agent_status",
                    data={
                        "status": "summarizing",
                        "message": "Summarizing agent dialogue",
                    },
                    connection_id=self.connection_id,
                )
                self.send_message_function(status_message.model_dump(mode="json"))
            output = summarize_chat_history(
                task=message_text,
                messages=self.agent_history,
                client=self.model_client,
            )
        elif summary_method == "none":
            output = ""
        return output

    def run(self, message: str, history: Optional[List[Message]] = None, clear_history: bool = False) -> Message:
        """
        Initiates a chat between the sender and receiver agents with an initial message
        and an option to clear the history.

        Args:
            message: The initial message to start the chat.
            history: A list of messages to populate the agents' history.
            clear_history: If set to True, clears the chat history before initiating.
        """
        start_time = time.time()
        self._run_workflow(message=message, history=history, clear_history=clear_history)
        end_time = time.time()
        output = self._generate_output(message, self.workflow.get("summary_method", "last"))

        result_message = Message(
            content=output,
            role="assistant",
            meta={
                "messages": self.agent_history,
                "summary_method": self.workflow.get("summary_method", "last"),
                "time": end_time - start_time,
                "files": get_modified_files(start_time, end_time, source_dir=self.work_dir),
                "task": message,
            },
        )
        return result_message


class WorkflowManager:
    """
    WorkflowManager class to load agents from a provided configuration and run a chat between them.
    """

    def __new__(
        cls,
        workflow: Union[Dict, str],
        history: Optional[List[Message]] = None,
        work_dir: str = None,
        clear_work_dir: bool = True,
        send_message_function: Optional[callable] = None,
        connection_id: Optional[str] = None,
    ):
        """
        Initializes the WorkflowManager with agents specified in the config and optional message history.

        Args:
            workflow (Union[Dict, str]): The workflow configuration. This can be a dictionary or a string which is a path to a JSON file.
            history (Optional[List[Message]]): The message history.
            work_dir (str): The working directory.
            clear_work_dir (bool): If set to True, clears the working directory.
            send_message_function (Optional[callable]): The function to send messages.
            connection_id (Optional[str]): The connection identifier.
        """
        if isinstance(workflow, str):
            if os.path.isfile(workflow):
                with open(workflow, "r") as file:
                    workflow_config = json.load(file)
            else:
                raise FileNotFoundError(f"The file {workflow} does not exist.")
        elif isinstance(workflow, dict):
            workflow_config = workflow
        else:
            raise ValueError("The 'workflow' parameter should be either a dictionary or a valid JSON file path")

        if workflow_config.get("type") == WorkFlowType.autonomous.value:
            return AutoWorkflowManager(
                workflow=workflow,
                history=history,
                work_dir=work_dir,
                clear_work_dir=clear_work_dir,
                send_message_function=send_message_function,
                connection_id=connection_id,
            )
        elif workflow_config.get("type") == WorkFlowType.sequential.value:
            return SequentialWorkflowManager(
                workflow=workflow,
                history=history,
                work_dir=work_dir,
                clear_work_dir=clear_work_dir,
                send_message_function=send_message_function,
                connection_id=connection_id,
            )
        raise ValueError(f"Unsupported workflow type: {workflow_config.get('type')}")


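`WorkflowManager` uses `__new__` as a factory: instead of constructing a `WorkflowManager` instance, it inspects the config's `type` field and returns a different manager class. A minimal sketch of this dispatch pattern (the class names and registry here are illustrative, not the library's):

```python
class BaseManager:
    """Common base for the concrete managers in this sketch."""

    def __init__(self, config: dict):
        self.config = config


class AutoManager(BaseManager):
    pass


class SeqManager(BaseManager):
    pass


class Manager:
    """Factory: __new__ returns a fully-constructed implementation class."""

    _registry = {"autonomous": AutoManager, "sequential": SeqManager}

    def __new__(cls, config: dict):
        kind = config.get("type")
        try:
            impl = cls._registry[kind]
        except KeyError:
            raise ValueError(f"Unsupported workflow type: {kind}") from None
        # Returning a non-subclass instance means Manager.__init__ is never run;
        # impl(config) is already fully initialized.
        return impl(config)
```

Because the returned object is not an instance of `Manager`, Python skips `Manager.__init__` entirely, which is why the real class does all its work inside `__new__`.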
class ExtendedConversableAgent(autogen.ConversableAgent):
@@ -1,5 +1,5 @@
 import type { GatsbyConfig } from "gatsby";
-import fs from 'fs';
+import fs from "fs";
 
 const envFile = `.env.${process.env.NODE_ENV}`;
@@ -18,6 +18,7 @@
   },
   "dependencies": {
     "@ant-design/charts": "^1.3.6",
+    "@ant-design/plots": "^2.2.2",
     "@headlessui/react": "^1.7.16",
     "@heroicons/react": "^2.0.18",
     "@mdx-js/mdx": "^1.6.22",
@@ -751,7 +751,7 @@ export const PdfViewer = ({ url }: { url: string }) => {
       data={url}
       type="application/pdf"
       width="100%"
-      height="450px"
+      style={{ height: "calc(90vh - 200px)" }}
     >
       <p>PDF cannot be displayed.</p>
     </object>
@@ -9,6 +9,8 @@ export interface IMessage {
   session_id?: number;
   connection_id?: string;
+  workflow_id?: number;
+  meta?: any;
   id?: number;
 }
 
 export interface IStatus {
@@ -21,7 +23,7 @@ export interface IChatMessage {
   text: string;
   sender: "user" | "bot";
-  meta?: any;
+  msg_id: string;
   id?: number;
 }
 
 export interface ILLMConfig {
@@ -63,9 +65,9 @@ export interface IAgent {
 export interface IWorkflow {
   name: string;
   description: string;
-  sender: IAgent;
-  receiver: IAgent;
-  type: "twoagents" | "groupchat";
+  sender?: IAgent;
+  receiver?: IAgent;
+  type?: "autonomous" | "sequential";
   created_at?: string;
   updated_at?: string;
   summary_method?: "none" | "last" | "llm";
@@ -78,7 +80,7 @@ export interface IModelConfig {
   api_key?: string;
   api_version?: string;
   base_url?: string;
-  api_type?: "open_ai" | "azure" | "google";
+  api_type?: "open_ai" | "azure" | "google" | "anthropic" | "mistral";
   user_id?: string;
   created_at?: string;
   updated_at?: string;
@@ -115,6 +117,8 @@ export interface IGalleryItem {
 export interface ISkill {
   name: string;
   content: string;
+  secrets?: any[];
+  libraries?: string[];
   id?: number;
   description?: string;
   user_id?: string;
@@ -266,6 +266,18 @@ export const sampleModelConfig = (modelType: string = "open_ai") => {
     description: "Google Gemini Model model",
   };
 
+  const anthropicConfig: IModelConfig = {
+    model: "claude-3-5-sonnet-20240620",
+    api_type: "anthropic",
+    description: "Claude 3.5 Sonnet model",
+  };
+
+  const mistralConfig: IModelConfig = {
+    model: "mistral",
+    api_type: "mistral",
+    description: "Mistral model",
+  };
+
   switch (modelType) {
     case "open_ai":
       return openaiConfig;
@@ -273,6 +285,10 @@ export const sampleModelConfig = (modelType: string = "open_ai") => {
       return azureConfig;
     case "google":
       return googleConfig;
+    case "anthropic":
+      return anthropicConfig;
+    case "mistral":
+      return mistralConfig;
     default:
       return openaiConfig;
   }
@@ -286,13 +302,36 @@ export const getRandomIntFromDateAndSalt = (salt: number = 43444) => {
   return randomInt;
 };
 
+export const getSampleWorkflow = (workflow_type: string = "autonomous") => {
+  const autonomousWorkflow: IWorkflow = {
+    name: "Default Chat Workflow",
+    description: "Autonomous Workflow",
+    type: "autonomous",
+    summary_method: "llm",
+  };
+  const sequentialWorkflow: IWorkflow = {
+    name: "Default Sequential Workflow",
+    description: "Sequential Workflow",
+    type: "sequential",
+    summary_method: "llm",
+  };
+
+  if (workflow_type === "autonomous") {
+    return autonomousWorkflow;
+  } else if (workflow_type === "sequential") {
+    return sequentialWorkflow;
+  } else {
+    return autonomousWorkflow;
+  }
+};
+
 export const sampleAgentConfig = (agent_type: string = "assistant") => {
   const llm_config: ILLMConfig = {
     config_list: [],
     temperature: 0.1,
     timeout: 600,
     cache_seed: null,
-    max_tokens: 1000,
+    max_tokens: 4000,
   };
 
   const userProxyConfig: IAgentConfig = {
@@ -357,90 +396,6 @@ export const sampleAgentConfig = (agent_type: string = "assistant") => {
   }
 };
 
-export const sampleWorkflowConfig = (type = "twoagents") => {
-  const llm_model_config: IModelConfig[] = [];
-
-  const llm_config: ILLMConfig = {
-    config_list: llm_model_config,
-    temperature: 0.1,
-    timeout: 600,
-    cache_seed: null,
-    max_tokens: 1000,
-  };
-
-  const userProxyConfig: IAgentConfig = {
-    name: "userproxy",
-    human_input_mode: "NEVER",
-    max_consecutive_auto_reply: 15,
-    system_message: "You are a helpful assistant.",
-    default_auto_reply: "TERMINATE",
-    llm_config: false,
-    code_execution_config: "local",
-  };
-  const userProxyFlowSpec: IAgent = {
-    type: "userproxy",
-    config: userProxyConfig,
-  };
-
-  const assistantConfig: IAgentConfig = {
-    name: "primary_assistant",
-    llm_config: llm_config,
-    human_input_mode: "NEVER",
-    max_consecutive_auto_reply: 8,
-    code_execution_config: "none",
-    system_message:
-      "You are a helpful AI assistant. Solve tasks using your coding and language skills. In the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute. 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself. 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly. Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill. When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user. If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user. If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try. When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible. Reply 'TERMINATE' in the end when everything is done.",
-  };
-
-  const assistantFlowSpec: IAgent = {
-    type: "assistant",
-    config: assistantConfig,
-  };
-
-  const workFlowConfig: IWorkflow = {
-    name: "Default Agent Workflow",
-    description: "Default Agent Workflow",
-    sender: userProxyFlowSpec,
-    receiver: assistantFlowSpec,
-    type: "twoagents",
-  };
-
-  const groupChatAssistantConfig = Object.assign(
-    {
-      admin_name: "groupchat_assistant",
-      messages: [],
-      max_round: 10,
-      speaker_selection_method: "auto",
-      allow_repeat_speaker: false,
-      description: "Group Chat Assistant",
-    },
-    assistantConfig
-  );
-  groupChatAssistantConfig.name = "groupchat_assistant";
-  groupChatAssistantConfig.system_message =
-    "You are a helpful assistant skilled at cordinating a group of other assistants to solve a task. ";
-
-  const groupChatFlowSpec: IAgent = {
-    type: "groupchat",
-    config: groupChatAssistantConfig,
-  };
-
-  const groupChatWorkFlowConfig: IWorkflow = {
-    name: "Default Group Workflow",
-    description: "Default Group Workflow",
-    sender: userProxyFlowSpec,
-    receiver: groupChatFlowSpec,
-    type: "groupchat",
-  };
-
-  if (type === "twoagents") {
-    return workFlowConfig;
-  } else if (type === "groupchat") {
-    return groupChatWorkFlowConfig;
-  }
-  return workFlowConfig;
-};
-
 export const getSampleSkill = () => {
   const content = `
 from typing import List
@@ -495,7 +450,7 @@ def generate_and_save_images(query: str, image_size: str = "1024x1024") -> List[
 `;
 
   const skill: ISkill = {
-    name: "generate_images",
+    name: "generate_and_save_images",
     description: "Generate and save images based on a user's query.",
     content: content,
   };
@@ -612,7 +567,7 @@ export const fetchVersion = () => {
  */
 export const sanitizeConfig = (
   data: any,
-  keys: string[] = ["api_key", "id", "created_at", "updated_at"]
+  keys: string[] = ["api_key", "id", "created_at", "updated_at", "secrets"]
 ): any => {
   if (Array.isArray(data)) {
     return data.map((item) => sanitizeConfig(item, keys));
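`sanitizeConfig` recursively strips sensitive or instance-specific keys (now including `secrets`) from a config before it is exported or duplicated. A rough Python equivalent of the same idea, for readers following the backend side (the function name and key list here mirror the TypeScript helper and are not an autogenstudio API):

```python
# Keys to drop, mirroring the TypeScript default (an assumption for illustration).
DEFAULT_KEYS = ("api_key", "id", "created_at", "updated_at", "secrets")


def sanitize_config(data, keys=DEFAULT_KEYS):
    """Recursively drop the given keys from dicts, and from dicts nested in lists."""
    if isinstance(data, list):
        return [sanitize_config(item, keys) for item in data]
    if isinstance(data, dict):
        return {k: sanitize_config(v, keys) for k, v in data.items() if k not in keys}
    # Scalars (strings, numbers, None) pass through unchanged.
    return data
```

This is why the duplicate handlers below can replace the manual `delete newX.id` bookkeeping with a single `sanitizeConfig` call.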
@@ -141,13 +141,9 @@ const AgentsView = ({}: any) => {
         icon: DocumentDuplicateIcon,
         onClick: (e: any) => {
           e.stopPropagation();
-          let newAgent = { ...agent };
+          let newAgent = { ...sanitizeConfig(agent) };
           newAgent.config.name = `${agent.config.name}_copy`;
-          newAgent.user_id = user?.email;
-          newAgent.updated_at = new Date().toISOString();
-          if (newAgent.id) {
-            delete newAgent.id;
-          }
+          console.log("newAgent", newAgent);
           setNewAgent(newAgent);
           setShowNewAgentModal(true);
         },
@@ -187,7 +183,7 @@ const AgentsView = ({}: any) => {
       aria-hidden="true"
       className="my-2 break-words"
     >
-      {" "}
+      <div className="text-xs mb-2">{agent.type}</div>{" "}
       {truncateText(agent.config.description || "", 70)}
     </div>
     <div
@@ -353,8 +349,11 @@ const AgentsView = ({}: any) => {
 
       <div className="text-xs mb-2 pb-1 ">
         {" "}
-        Configure an agent that can reused in your agent workflow{" "}
-        {selectedAgent?.config.name}
+        Configure an agent that can be reused in your agent workflow.
+        <div>
+          Tip: You can also create a Group of Agents ( New Agent -
+          GroupChat) which can have multiple agents in it.
+        </div>
       </div>
       {agents && agents.length > 0 && (
         <div className="w-full relative">
@@ -6,7 +6,7 @@ import {
   PlusIcon,
   TrashIcon,
 } from "@heroicons/react/24/outline";
-import { Button, Dropdown, Input, MenuProps, Modal, message } from "antd";
+import { Dropdown, MenuProps, Modal, message } from "antd";
 import * as React from "react";
 import { IModelConfig, IStatus } from "../../types";
 import { appContext } from "../../../hooks/provider";
@@ -17,14 +17,7 @@ import {
   timeAgo,
   truncateText,
 } from "../../utils";
-import {
-  BounceLoader,
-  Card,
-  CardHoverBar,
-  ControlRowView,
-  LoadingOverlay,
-} from "../../atoms";
-import TextArea from "antd/es/input/TextArea";
+import { BounceLoader, Card, CardHoverBar, LoadingOverlay } from "../../atoms";
 import { ModelConfigView } from "./utils/modelconfig";
 
 const ModelsView = ({}: any) => {
@@ -175,13 +168,8 @@ const ModelsView = ({}: any) => {
         icon: DocumentDuplicateIcon,
         onClick: (e: any) => {
           e.stopPropagation();
-          let newModel = { ...model };
-          newModel.model = `${model.model} Copy`;
-          newModel.user_id = user?.email;
-          newModel.updated_at = new Date().toISOString();
-          if (newModel.id) {
-            delete newModel.id;
-          }
+          let newModel = { ...sanitizeConfig(model) };
+          newModel.model = `${model.model}_copy`;
           setNewModel(newModel);
           setShowNewModelModal(true);
         },
@@ -1,12 +1,15 @@
 import {
   ArrowDownTrayIcon,
   ArrowUpTrayIcon,
+  CodeBracketIcon,
+  CodeBracketSquareIcon,
   DocumentDuplicateIcon,
+  InformationCircleIcon,
   KeyIcon,
   PlusIcon,
   TrashIcon,
 } from "@heroicons/react/24/outline";
-import { Button, Input, Modal, message, MenuProps, Dropdown } from "antd";
+import { Button, Input, Modal, message, MenuProps, Dropdown, Tabs } from "antd";
 import * as React from "react";
 import { ISkill, IStatus } from "../../types";
 import { appContext } from "../../../hooks/provider";
@@ -25,6 +28,8 @@ import {
   LoadingOverlay,
   MonacoEditor,
 } from "../../atoms";
+import { SkillSelector } from "./utils/selectors";
+import { SkillConfigView } from "./utils/skillconfig";
 
 const SkillsView = ({}: any) => {
   const [loading, setLoading] = React.useState(false);
@@ -109,38 +114,6 @@ const SkillsView = ({}: any) => {
     fetchJSON(listSkillsUrl, payLoad, onSuccess, onError);
   };
 
-  const saveSkill = (skill: ISkill) => {
-    setError(null);
-    setLoading(true);
-    // const fetch;
-    skill.user_id = user?.email;
-    const payLoad = {
-      method: "POST",
-      headers: {
-        Accept: "application/json",
-        "Content-Type": "application/json",
-      },
-      body: JSON.stringify(skill),
-    };
-
-    const onSuccess = (data: any) => {
-      if (data && data.status) {
-        message.success(data.message);
-        const updatedSkills = [data.data].concat(skills || []);
-        setSkills(updatedSkills);
-      } else {
-        message.error(data.message);
-      }
-      setLoading(false);
-    };
-    const onError = (err: any) => {
-      setError(err);
-      message.error(err.message);
-      setLoading(false);
-    };
-    fetchJSON(saveSkillsUrl, payLoad, onSuccess, onError);
-  };
-
   React.useEffect(() => {
     if (user) {
       // console.log("fetching messages", messages);
@@ -173,12 +146,8 @@ const SkillsView = ({}: any) => {
         icon: DocumentDuplicateIcon,
         onClick: (e: any) => {
           e.stopPropagation();
-          let newSkill = { ...skill };
-          newSkill.name = `${skill.name} Copy`;
-          newSkill.user_id = user?.email;
-          if (newSkill.id) {
-            delete newSkill.id;
-          }
+          let newSkill = { ...sanitizeConfig(skill) };
+          newSkill.name = `${skill.name}_copy`;
           setNewSkill(newSkill);
           setShowNewSkillModal(true);
         },
@@ -245,6 +214,15 @@ const SkillsView = ({}: any) => {
   }) => {
     const editorRef = React.useRef<any | null>(null);
+    const [localSkill, setLocalSkill] = React.useState<ISkill | null>(skill);
+
+    const closeModal = () => {
+      setSkill(null);
+      setShowSkillModal(false);
+      if (handler) {
+        handler(skill);
+      }
+    };
 
     return (
       <Modal
         title={
@@ -258,54 +236,14 @@ const SkillsView = ({}: any) => {
       onCancel={() => {
         setShowSkillModal(false);
       }}
-      footer={[
-        <Button
-          key="back"
-          onClick={() => {
-            setShowSkillModal(false);
-          }}
-        >
-          Cancel
-        </Button>,
-        <Button
-          key="submit"
-          type="primary"
-          loading={loading}
-          onClick={() => {
-            setShowSkillModal(false);
-            if (editorRef.current) {
-              const value = editorRef.current.getValue();
-              const updatedSkill = { ...localSkill, content: value };
-              setSkill(updatedSkill);
-              handler(updatedSkill);
-            }
-          }}
-        >
-          Save
-        </Button>,
-      ]}
+      footer={[]}
     >
       {localSkill && (
-        <div style={{ minHeight: "70vh" }}>
-          <div className="mb-2">
-            <Input
-              placeholder="Skill Name"
-              value={localSkill.name}
-              onChange={(e) => {
-                const updatedSkill = { ...localSkill, name: e.target.value };
-                setLocalSkill(updatedSkill);
-              }}
-            />
-          </div>
-
-          <div style={{ height: "70vh" }} className="h-full mt-2 rounded">
-            <MonacoEditor
-              value={localSkill?.content}
-              language="python"
-              editorRef={editorRef}
-            />
-          </div>
-        </div>
+        <SkillConfigView
+          skill={localSkill}
+          setSkill={setLocalSkill}
+          close={closeModal}
+        />
       )}
     </Modal>
   );
@@ -367,17 +305,17 @@ const SkillsView = ({}: any) => {
       showSkillModal={showSkillModal}
       setShowSkillModal={setShowSkillModal}
       handler={(skill: ISkill) => {
-        saveSkill(skill);
+        fetchSkills();
       }}
     />
 
     <SkillModal
-      skill={newSkill}
+      skill={newSkill || sampleSkill}
      setSkill={setNewSkill}
       showSkillModal={showNewSkillModal}
       setShowSkillModal={setShowNewSkillModal}
       handler={(skill: ISkill) => {
-        saveSkill(skill);
+        fetchSkills();
       }}
     />
@@ -63,7 +63,7 @@ export const AgentConfigView = ({
   const llm_config: ILLMConfig = agent?.config?.llm_config || {
     config_list: [],
     temperature: 0.1,
-    max_tokens: 1000,
+    max_tokens: 4000,
   };
 
   const createAgent = (agent: IAgent) => {
@@ -0,0 +1,207 @@ (new file: ExportWorkflowModal)
import { Button, Modal, message } from "antd";
import * as React from "react";
import { IWorkflow } from "../../../types";
import { ArrowDownTrayIcon } from "@heroicons/react/24/outline";
import {
  checkAndSanitizeInput,
  fetchJSON,
  getServerUrl,
  sanitizeConfig,
} from "../../../utils";
import { appContext } from "../../../../hooks/provider";
import { CodeBlock } from "../../../atoms";

export const ExportWorkflowModal = ({
  workflow,
  show,
  setShow,
}: {
  workflow: IWorkflow | null;
  show: boolean;
  setShow: (show: boolean) => void;
}) => {
  const serverUrl = getServerUrl();
  const { user } = React.useContext(appContext);

  const [error, setError] = React.useState<any>(null);
  const [loading, setLoading] = React.useState<boolean>(false);
  const [workflowDetails, setWorkflowDetails] = React.useState<any>(null);

  const getWorkflowCode = (workflow: IWorkflow) => {
    const workflowCode = `from autogenstudio import WorkflowManager
# load workflow from exported json workflow file.
workflow_manager = WorkflowManager(workflow="path/to/your/workflow_.json")

# run the workflow on a task
task_query = "What is the height of the Eiffel Tower? Don't write code, just respond to the question."
workflow_manager.run(message=task_query)`;
    return workflowCode;
  };

  const getCliWorkflowCode = (workflow: IWorkflow) => {
    const workflowCode = `autogenstudio serve --workflow=workflow.json --port=5000`;
    return workflowCode;
  };

  const getGunicornWorkflowCode = (workflow: IWorkflow) => {
    const workflowCode = `gunicorn -w $((2 * $(getconf _NPROCESSORS_ONLN) + 1)) --timeout 12600 -k uvicorn.workers.UvicornWorker autogenstudio.web.app:app --bind `;
    return workflowCode;
  };

  const fetchWorkFlow = (workflow: IWorkflow) => {
    setError(null);
    setLoading(true);
    // const fetch;
    const payLoad = {
      method: "GET",
      headers: {
        "Content-Type": "application/json",
      },
    };
    const downloadWorkflowUrl = `${serverUrl}/workflows/export/${workflow.id}?user_id=${user?.email}`;

    const onSuccess = (data: any) => {
      if (data && data.status) {
        setWorkflowDetails(data.data);
        console.log("workflow details", data.data);

        const sanitized_name =
          checkAndSanitizeInput(workflow.name).sanitizedText || workflow.name;
        const file_name = `workflow_${sanitized_name}.json`;
        const workflowData = sanitizeConfig(data.data);
        const file = new Blob([JSON.stringify(workflowData)], {
          type: "application/json",
        });
        const downloadUrl = URL.createObjectURL(file);
        const a = document.createElement("a");
        a.href = downloadUrl;
        a.download = file_name;
        a.click();
      } else {
        message.error(data.message);
      }
      setLoading(false);
    };
    const onError = (err: any) => {
      setError(err);
      message.error(err.message);
      setLoading(false);
    };
    fetchJSON(downloadWorkflowUrl, payLoad, onSuccess, onError);
  };

  React.useEffect(() => {
    if (workflow && workflow.id && show) {
      // fetchWorkFlow(workflow.id);
      console.log("workflow modal ... component loaded", workflow);
    }
  }, [show]);

  return (
    <Modal
      title={
        <>
          Export Workflow
          <span className="text-accent font-normal ml-2">
            {workflow?.name}
          </span>{" "}
        </>
      }
      width={800}
      open={show}
      onOk={() => {
        setShow(false);
      }}
      onCancel={() => {
        setShow(false);
      }}
      footer={[]}
    >
      <div>
        <div>
          {" "}
          You can use the following steps to start integrating your workflow
          into your application.{" "}
        </div>
        {workflow && workflow.id && (
          <>
            <div className="flex mt-2 gap-3">
              <div>
                <div className="text-sm mt-2 mb-2 pb-1 font-bold">Step 1</div>
                <div className="mt-2 mb-2 pb-1 text-xs">
                  Download your workflow as a JSON file by clicking the button
                  below.
                </div>

                <div className="text-sm mt-2 mb-2 pb-1">
                  <Button
                    type="primary"
                    loading={loading}
                    onClick={() => {
                      fetchWorkFlow(workflow);
                    }}
                  >
                    Download
                    <ArrowDownTrayIcon className="h-4 w-4 inline-block ml-2 -mt-1" />
                  </Button>
                </div>
              </div>

              <div>
                <div className="text-sm mt-2 mb-2 pb-1 font-bold">Step 2</div>
                <div className="mt-2 mb-2 pb-1 text-xs">
                  Copy the following code snippet and paste it into your
                  application to run your workflow on a task.
                </div>
                <div className="text-sm mt-2 mb-2 pb-1">
                  <CodeBlock
                    className="text-xs"
                    code={getWorkflowCode(workflow)}
                    language="python"
                    wrapLines={true}
                  />
                </div>
              </div>
            </div>

            <div>
              <div className="text-sm mt-2 mb-2 pb-1 font-bold">
                Step 3 (Deploy)
              </div>
              <div className="mt-2 mb-2 pb-1 text-xs">
                You can also deploy your workflow as an API endpoint using the
                autogenstudio python CLI.
              </div>

              <div className="text-sm mt-2 mb-2 pb-1">
                <CodeBlock
                  className="text-xs"
                  code={getCliWorkflowCode(workflow)}
                  language="bash"
                  wrapLines={true}
                />

                <div className="text-xs mt-2">
                  Note: this will start an endpoint on port 5000, which you can
                  change with the port flag. You can also scale this using
                  multiple workers (e.g., via an application server like
                  gunicorn) or wrap it in a docker container and deploy on a
                  cloud provider like Azure.
                </div>

                <CodeBlock
                  className="text-xs"
                  code={getGunicornWorkflowCode(workflow)}
                  language="bash"
                  wrapLines={true}
                />
              </div>
            </div>
          </>
        )}
      </div>
    </Modal>
  );
};
@@ -23,18 +23,35 @@ const ModelTypeSelector = ({
       value: "open_ai",
       description: "OpenAI or other endpoints that implement the OpenAI API",
       icon: <CpuChipIcon className="h-6 w-6 text-primary" />,
+      hint: "In addition to OpenAI models, You can also use OSS models via tools like Ollama, vLLM, LMStudio etc. that provide OpenAI compatible endpoint.",
     },
     {
       label: "Azure OpenAI",
       value: "azure",
       description: "Azure OpenAI endpoint",
       icon: <CpuChipIcon className="h-6 w-6 text-primary" />,
+      hint: "Azure OpenAI endpoint",
     },
     {
       label: "Gemini",
       value: "google",
       description: "Gemini",
       icon: <CpuChipIcon className="h-6 w-6 text-primary" />,
+      hint: "Gemini",
     },
+    {
+      label: "Claude",
+      value: "anthropic",
+      description: "Anthropic Claude",
+      icon: <CpuChipIcon className="h-6 w-6 text-primary" />,
+      hint: "Anthropic Claude models",
+    },
+    {
+      label: "Mistral",
+      value: "mistral",
+      description: "Mistral",
+      icon: <CpuChipIcon className="h-6 w-6 text-primary" />,
+      hint: "Mistral models",
+    },
   ];
@@ -46,7 +63,7 @@ const ModelTypeSelector = ({
     return (
       <li
         onMouseEnter={() => {
-          setSelectedHint(modelType.value);
+          setSelectedHint(modelType.hint);
         }}
         role="listitem"
         key={"modeltype" + i}
@@ -78,13 +95,6 @@ const ModelTypeSelector = ({
     );
   });
 
-  const hints: any = {
-    open_ai:
-      "In addition to OpenAI models, You can also use OSS models via tools like Ollama, vLLM, LMStudio etc. that provide OpenAI compatible endpoint.",
-    azure: "Azure OpenAI endpoint",
-    google: "Gemini",
-  };
-
   const [selectedHint, setSelectedHint] = React.useState<string>("open_ai");
 
   return (
@@ -94,7 +104,7 @@ const ModelTypeSelector = ({
 
       <div className="text-xs mt-4">
         <InformationCircleIcon className="h-4 w-4 inline mr-1 -mt-1" />
-        {hints[selectedHint]}
+        {selectedHint}
       </div>
     </>
   );
@@ -3,10 +3,10 @@ import { IAgent, IModelConfig, ISkill, IWorkflow } from "../../../types";
 import { Card } from "../../../atoms";
 import {
   fetchJSON,
+  getSampleWorkflow,
   getServerUrl,
   obscureString,
   sampleAgentConfig,
-  sampleWorkflowConfig,
   truncateText,
 } from "../../../utils";
 import {
@@ -19,6 +19,8 @@ import {
  theme,
} from "antd";
import {
  ArrowLongRightIcon,
  ChatBubbleLeftRightIcon,
  CodeBracketSquareIcon,
  ExclamationTriangleIcon,
  InformationCircleIcon,
@@ -354,7 +356,7 @@ export const AgentTypeSelector = ({
  return (
    <>
      <div className="pb-3">Select Agent Type</div>
      <div className="pb-3 text-primary">Select Agent Type</div>
      <ul className="inline-flex gap-2">{agentTypeRows}</ul>
    </>
  );
@@ -370,10 +372,18 @@ export const WorkflowTypeSelector = ({
  const iconClass = "h-6 w-6 inline-block ";
  const workflowTypes = [
    {
      label: "Default",
      value: "default",
      description: <> Includes a sender and receiver. </>,
      icon: <UserCircleIcon className={iconClass} />,
      label: "Autonomous (Chat)",
      value: "autonomous",
      description:
        "Includes an initiator and receiver. The initiator is typically a user proxy agent, while the receiver could be any agent type (assistant or groupchat).",
      icon: <ChatBubbleLeftRightIcon className={iconClass} />,
    },
    {
      label: "Sequential",
      value: "sequential",
      description:
        " Includes a list of agents in a given order. Each agent should have an instruction and will summarize and pass on the results of their work to the next agent",
      icon: <ArrowLongRightIcon className={iconClass} />,
    },
  ];
  const [seletectedWorkflowType, setSelectedWorkflowType] = React.useState<
@@ -390,7 +400,7 @@ export const WorkflowTypeSelector = ({
  onClick={() => {
    setSelectedWorkflowType(workflowType.value);
    if (workflow) {
      const sampleWorkflow = sampleWorkflowConfig();
      const sampleWorkflow = getSampleWorkflow(workflowType.value);
      setWorkflow(sampleWorkflow);
    }
  }}
@@ -398,9 +408,12 @@ export const WorkflowTypeSelector = ({
  <div style={{ minHeight: "35px" }} className="my-2 break-words">
    {" "}
    <div className="mb-2">{workflowType.icon}</div>
    <span className="text-secondary tex-sm">
    <span
      className="text-secondary tex-sm"
      title={workflowType.description}
    >
      {" "}
      {workflowType.description}
      {truncateText(workflowType.description, 60)}
    </span>
  </div>
</Card>
@@ -410,7 +423,7 @@ export const WorkflowTypeSelector = ({
  return (
    <>
      <div className="pb-3">Select Workflow Type</div>
      <div className="pb-3 text-primary">Select Workflow Type</div>
      <ul className="inline-flex gap-2">{workflowTypeRows}</ul>
    </>
  );
@@ -964,17 +977,15 @@ export const ModelSelector = ({ agentId }: { agentId: number }) => {
};

export const WorkflowAgentSelector = ({
  workflowId,
  workflow,
}: {
  workflowId: number;
  workflow: IWorkflow;
}) => {
  const [error, setError] = useState<string | null>(null);
  const [loading, setLoading] = useState<boolean>(false);
  const [agents, setAgents] = useState<IAgent[]>([]);
  const [senderTargetAgents, setSenderTargetAgents] = useState<IAgent[]>([]);
  const [receiverTargetAgents, setReceiverTargetAgents] = useState<IAgent[]>(
    []
  );
  const [linkedAgents, setLinkedAgents] = useState<any[]>([]);

  const serverUrl = getServerUrl();
  const { user } = React.useContext(appContext);
@@ -1008,11 +1019,8 @@ export const WorkflowAgentSelector = ({
    fetchJSON(listAgentsUrl, payLoad, onSuccess, onError);
  };

  const fetchTargetAgents = (
    setTarget: (arg0: any) => void,
    agentType: string
  ) => {
    const listTargetAgentsUrl = `${serverUrl}/workflows/link/agent/${workflowId}/${agentType}`;
  const fetchLinkedAgents = () => {
    const listTargetAgentsUrl = `${serverUrl}/workflows/link/agent/${workflow.id}`;
    setError(null);
    setLoading(true);
    const payLoad = {
@@ -1024,7 +1032,8 @@ export const WorkflowAgentSelector = ({
    const onSuccess = (data: any) => {
      if (data && data.status) {
        setTarget(data.data);
        setLinkedAgents(data.data);
        console.log("linked agents", data.data);
      } else {
        message.error(data.message);
      }
@@ -1042,7 +1051,8 @@ export const WorkflowAgentSelector = ({
  const linkWorkflowAgent = (
    workflowId: number,
    targetAgentId: number,
    agentType: string
    agentType: string,
    sequenceId?: number
  ) => {
    setError(null);
    setLoading(true);
|
|||
"Content-Type": "application/json",
|
||||
},
|
||||
};
|
||||
const linkAgentUrl = `${serverUrl}/workflows/link/agent/${workflowId}/${targetAgentId}/${agentType}`;
|
||||
let linkAgentUrl;
|
||||
linkAgentUrl = `${serverUrl}/workflows/link/agent/${workflowId}/${targetAgentId}/${agentType}`;
|
||||
if (agentType === "sequential") {
|
||||
linkAgentUrl = `${serverUrl}/workflows/link/agent/${workflowId}/${targetAgentId}/${agentType}/${sequenceId}`;
|
||||
}
|
||||
const onSuccess = (data: any) => {
|
||||
if (data && data.status) {
|
||||
message.success(data.message);
|
||||
if (agentType === "sender") {
|
||||
fetchTargetAgents(setSenderTargetAgents, "sender");
|
||||
} else {
|
||||
fetchTargetAgents(setReceiverTargetAgents, "receiver");
|
||||
}
|
||||
fetchLinkedAgents();
|
||||
} else {
|
||||
message.error(data.message);
|
||||
}
|
||||
|
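The hunk above appends a sequence segment to the link URL only for sequential workflows. A minimal sketch of that rule as a pure function — the helper name is an assumption; the path shape is taken from the diff:

```typescript
// Illustrative helper (not in the repository): build the link-agent URL,
// appending the sequence id only when the link is sequential.
function buildLinkAgentUrl(
  serverUrl: string,
  workflowId: number,
  targetAgentId: number,
  agentType: string,
  sequenceId?: number
): string {
  const base = `${serverUrl}/workflows/link/agent/${workflowId}/${targetAgentId}/${agentType}`;
  return agentType === "sequential" ? `${base}/${sequenceId}` : base;
}
```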
@@ -1076,11 +1086,7 @@ export const WorkflowAgentSelector = ({
    fetchJSON(linkAgentUrl, payLoad, onSuccess, onError);
  };

  const unlinkWorkflowAgent = (
    workflowId: number,
    targetAgentId: number,
    agentType: string
  ) => {
  const unlinkWorkflowAgent = (agent: IAgent, link: any) => {
    setError(null);
    setLoading(true);
    const payLoad = {
@@ -1089,16 +1095,17 @@ export const WorkflowAgentSelector = ({
        "Content-Type": "application/json",
      },
    };
    const unlinkAgentUrl = `${serverUrl}/workflows/link/agent/${workflowId}/${targetAgentId}/${agentType}`;

    let unlinkAgentUrl;
    unlinkAgentUrl = `${serverUrl}/workflows/link/agent/${workflow.id}/${agent.id}/${link.agent_type}`;
    if (link.agent_type === "sequential") {
      unlinkAgentUrl = `${serverUrl}/workflows/link/agent/${workflow.id}/${agent.id}/${link.agent_type}/${link.sequence_id}`;
    }

    const onSuccess = (data: any) => {
      if (data && data.status) {
        message.success(data.message);
        if (agentType === "sender") {
          fetchTargetAgents(setSenderTargetAgents, "sender");
        } else {
          fetchTargetAgents(setReceiverTargetAgents, "receiver");
        }
        fetchLinkedAgents();
      } else {
        message.error(data.message);
      }
@@ -1116,8 +1123,7 @@ export const WorkflowAgentSelector = ({
  useEffect(() => {
    fetchAgents();
    fetchTargetAgents(setSenderTargetAgents, "sender");
    fetchTargetAgents(setReceiverTargetAgents, "receiver");
    fetchLinkedAgents();
  }, []);

  const agentItems: MenuProps["items"] =
@@ -1145,9 +1151,26 @@ export const WorkflowAgentSelector = ({
  const receiverOnclick: MenuProps["onClick"] = ({ key }) => {
    const selectedIndex = parseInt(key.toString());
    let selectedAgent = agents[selectedIndex];
    if (selectedAgent && selectedAgent.id && workflow.id) {
      linkWorkflowAgent(workflow.id, selectedAgent.id, "receiver");
    }
  };

    if (selectedAgent && selectedAgent.id) {
      linkWorkflowAgent(workflowId, selectedAgent.id, "receiver");
  const sequenceOnclick: MenuProps["onClick"] = ({ key }) => {
    const selectedIndex = parseInt(key.toString());
    let selectedAgent = agents[selectedIndex];

    if (selectedAgent && selectedAgent.id && workflow.id) {
      const sequenceId =
        linkedAgents.length > 0
          ? linkedAgents[linkedAgents.length - 1].link.sequence_id + 1
          : 0;
      linkWorkflowAgent(
        workflow.id,
        selectedAgent.id,
        "sequential",
        sequenceId
      );
    }
  };
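`sequenceOnclick` above derives the next `sequence_id` from the last linked agent. That rule in isolation, as a sketch — the types are assumptions inferred from the diff:

```typescript
// Sketch of the next-sequence-id rule used by sequenceOnclick: a new
// agent is appended after the last link, starting from 0 when the
// workflow has no linked agents yet.
interface LinkedAgentRow {
  link: { sequence_id: number };
}

function nextSequenceId(linkedAgents: LinkedAgentRow[]): number {
  return linkedAgents.length > 0
    ? linkedAgents[linkedAgents.length - 1].link.sequence_id + 1
    : 0;
}
```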
@@ -1155,18 +1178,16 @@ export const WorkflowAgentSelector = ({
    const selectedIndex = parseInt(key.toString());
    let selectedAgent = agents[selectedIndex];

    if (selectedAgent && selectedAgent.id) {
      linkWorkflowAgent(workflowId, selectedAgent.id, "sender");
    if (selectedAgent && selectedAgent.id && workflow.id) {
      linkWorkflowAgent(workflow.id, selectedAgent.id, "sender");
    }
  };

  const handleRemoveAgent = (index: number, agentType: string) => {
    const targetAgents =
      agentType === "sender" ? senderTargetAgents : receiverTargetAgents;
    const agent = targetAgents[index];
    if (agent && agent.id) {
      unlinkWorkflowAgent(workflowId, agent.id, agentType);
  const handleRemoveAgent = (agent: IAgent, link: any) => {
    if (agent && agent.id && workflow.id) {
      unlinkWorkflowAgent(agent, link);
    }
    console.log(link);
  };

  const { token } = useToken();
@@ -1185,9 +1206,11 @@ export const WorkflowAgentSelector = ({
    onClick: MenuProps["onClick"];
    agentType: string;
  }) => {
    const targetAgents =
      agentType === "sender" ? senderTargetAgents : receiverTargetAgents;
    const agentButtons = targetAgents.map((agent, i) => {
    const targetAgents = linkedAgents.filter(
      (row) => row.link.agent_type === agentType
    );

    const agentButtons = targetAgents.map(({ agent, link }, i) => {
      const tooltipText = (
        <>
          <div>{agent.config.name}</div>
@@ -1197,32 +1220,38 @@ export const WorkflowAgentSelector = ({
        </>
      );
      return (
        <div
          key={"agentrow_" + i}
          className="mr-1 mb-1 p-1 px-2 rounded border"
        >
          <div className="inline-flex">
        <div key={"agentrow_" + i}>
          <div className="mr-1 mb-1 p-1 px-2 rounded border inline-block">
            {" "}
            <Tooltip title={tooltipText}>
              <div>{agent.config.name}</div>{" "}
            </Tooltip>
            <div
              role="button"
              onClick={(e) => {
                e.stopPropagation(); // Prevent opening the modal to edit
                handleRemoveAgent(i, agentType);
              }}
              className="ml-1 text-primary hover:text-accent duration-300"
            >
              <XMarkIcon className="w-4 h-4 inline-block" />
            <div className="inline-flex">
              {" "}
              <Tooltip title={tooltipText}>
                <div>{agent.config.name}</div>{" "}
              </Tooltip>
              <div
                role="button"
                onClick={(e) => {
                  e.stopPropagation(); // Prevent opening the modal to edit
                  handleRemoveAgent(agent, link);
                }}
                className="ml-1 text-primary hover:text-accent duration-300"
              >
                <XMarkIcon className="w-4 h-4 inline-block" />
              </div>
            </div>
          </div>
          {link.agent_type === "sequential" &&
            i !== targetAgents.length - 1 && (
              <div className="inline-block mx-2">
                <ArrowLongRightIcon className="w-4 h-4 text-secondary inline-block " />{" "}
              </div>
            )}
        </div>
      );
    });

    return (
      <div>
        <div className="text-primary">
        <div>
          {(!targetAgents || targetAgents.length === 0) && (
            <div className="text-sm border rounded text-secondary p-2 my-2">
@@ -1239,13 +1268,14 @@ export const WorkflowAgentSelector = ({
            remove current agents and add new ones.
          </div>
        )}
        {targetAgents && targetAgents.length < 1 && (
        {((targetAgents.length < 1 && agentType !== "sequential") ||
          agentType === "sequential") && (
          <Dropdown
            menu={{ items: agentItems, onClick: onClick }}
            placement="bottomRight"
            trigger={["click"]}
            dropdownRender={(menu) => (
              <div style={contentStyle}>
              <div className="h-64" style={contentStyle}>
                {React.cloneElement(menu as React.ReactElement, {
                  style: { boxShadow: "none" },
                })}
@@ -1268,7 +1298,7 @@ export const WorkflowAgentSelector = ({
      <div className="pt-2 border-dashed border-t mt-2">
        {" "}
        <div
          className=" inline-flex mr-1 mb-1 p-1 px-2 rounded border hover:border-accent duration-300 hover:text-accent"
          className=" inline-flex mr-1 mb-1 p-1 px-2 rounded border hover:border-accent text-primary duration-300 hover:text-accent"
          role="button"
        >
          Add {title} <PlusIcon className="w-4 h-4 inline-block mt-1" />
@@ -1282,33 +1312,48 @@ export const WorkflowAgentSelector = ({
  return (
    <div>
      <div className="grid grid-cols-2 gap-3">
        <div>
          <h3 className="text-sm mb-2">
            Initiator{" "}
            <Tooltip title={"Agent that initiates the conversation"}>
              <InformationCircleIcon className="h-4 w-4 inline-block" />
            </Tooltip>
          </h3>
      {workflow.type === "autonomous" && (
        <div className="grid grid-cols-2 gap-3">
          <div>
            <h3 className="text-sm mb-2">
              Initiator{" "}
              <Tooltip title={"Agent that initiates the conversation"}>
                <InformationCircleIcon className="h-4 w-4 inline-block" />
              </Tooltip>
            </h3>
            <ul>
              <AddAgentDropDown
                title="Sender"
                onClick={senderOnClick}
                agentType="sender"
              />
            </ul>
          </div>
          <div>
            <h3 className="text-sm mb-2">Receiver</h3>
            <ul>
              <AddAgentDropDown
                title="Receiver"
                onClick={receiverOnclick}
                agentType="receiver"
              />
            </ul>
          </div>
        </div>
      )}

      {workflow.type === "sequential" && (
        <div className="text-primary">
          <div className="text-sm mb-2">Agents</div>
          <ul>
            <AddAgentDropDown
              title="Sender"
              onClick={senderOnClick}
              agentType="sender"
              title="Agent"
              onClick={sequenceOnclick}
              agentType="sequential"
            />
          </ul>
        </div>
        <div>
          <h3 className="text-sm mb-2">Receiver</h3>
          <ul>
            <AddAgentDropDown
              title="Receiver"
              onClick={receiverOnclick}
              agentType="receiver"
            />
          </ul>
        </div>
      </div>
    )}
    </div>
  );
};
@@ -0,0 +1,295 @@
import React from "react";
import { fetchJSON, getServerUrl, sampleModelConfig } from "../../../utils";
import { Button, Input, message, theme } from "antd";
import {
  CpuChipIcon,
  EyeIcon,
  EyeSlashIcon,
  InformationCircleIcon,
  PlusIcon,
  TrashIcon,
} from "@heroicons/react/24/outline";
import { ISkill, IStatus } from "../../../types";
import { Card, ControlRowView, MonacoEditor } from "../../../atoms";
import TextArea from "antd/es/input/TextArea";
import { appContext } from "../../../../hooks/provider";

const SecretsEditor = ({
  secrets = [],
  updateSkillConfig,
}: {
  secrets: { secret: string; value: string }[];
  updateSkillConfig: (key: string, value: any) => void;
}) => {
  const [editingIndex, setEditingIndex] = React.useState<number | null>(null);
  const [newSecret, setNewSecret] = React.useState<string>("");
  const [newValue, setNewValue] = React.useState<string>("");

  const toggleEditing = (index: number) => {
    setEditingIndex(editingIndex === index ? null : index);
  };

  const handleAddSecret = () => {
    if (newSecret && newValue) {
      const updatedSecrets = [
        ...secrets,
        { secret: newSecret, value: newValue },
      ];
      updateSkillConfig("secrets", updatedSecrets);
      setNewSecret("");
      setNewValue("");
    }
  };

  const handleRemoveSecret = (index: number) => {
    const updatedSecrets = secrets.filter((_, i) => i !== index);
    updateSkillConfig("secrets", updatedSecrets);
  };

  const handleSecretChange = (index: number, key: string, value: string) => {
    const updatedSecrets = secrets.map((item, i) =>
      i === index ? { ...item, [key]: value } : item
    );
    updateSkillConfig("secrets", updatedSecrets);
  };

  return (
    <div className="mt-4">
      {secrets && (
        <div className="flex flex-col gap-2">
          {secrets.map((secret, index) => (
            <div key={index} className="flex items-center gap-2">
              <Input
                value={secret.secret}
                disabled={editingIndex !== index}
                onChange={(e) =>
                  handleSecretChange(index, "secret", e.target.value)
                }
                className="flex-1"
              />
              <Input.Password
                value={secret.value}
                visibilityToggle
                disabled={editingIndex !== index}
                onChange={(e) =>
                  handleSecretChange(index, "value", e.target.value)
                }
                className="flex-1"
              />
              <Button
                icon={
                  editingIndex === index ? (
                    <EyeSlashIcon className="h-5 w-5" />
                  ) : (
                    <EyeIcon className="h-5 w-5" />
                  )
                }
                onClick={() => toggleEditing(index)}
              />
              <Button
                icon={<TrashIcon className="h-5 w-5" />}
                onClick={() => handleRemoveSecret(index)}
              />
            </div>
          ))}
        </div>
      )}
      <div className="flex items-center gap-2 mt-2">
        <Input
          placeholder="New Secret"
          value={newSecret}
          onChange={(e) => setNewSecret(e.target.value)}
          className="flex-1"
        />
        <Input.Password
          placeholder="New Value"
          value={newValue}
          onChange={(e) => setNewValue(e.target.value)}
          className="flex-1"
        />
        <Button
          icon={<PlusIcon className="h-5 w-5" />}
          onClick={handleAddSecret}
        />
      </div>
    </div>
  );
};

export const SkillConfigView = ({
  skill,
  setSkill,
  close,
}: {
  skill: ISkill;
  setSkill: (newModel: ISkill) => void;
  close: () => void;
}) => {
  const [loading, setLoading] = React.useState(false);

  const serverUrl = getServerUrl();
  const { user } = React.useContext(appContext);
  const testModelUrl = `${serverUrl}/skills/test`;
  const createSkillUrl = `${serverUrl}/skills`;

  const createSkill = (skill: ISkill) => {
    setLoading(true);
    skill.user_id = user?.email;
    const payLoad = {
      method: "POST",
      headers: {
        Accept: "application/json",
        "Content-Type": "application/json",
      },
      body: JSON.stringify(skill),
    };

    const onSuccess = (data: any) => {
      if (data && data.status) {
        message.success(data.message);
        setSkill(data.data);
      } else {
        message.error(data.message);
      }
      setLoading(false);
    };
    const onError = (err: any) => {
      message.error(err.message);
      setLoading(false);
    };
    const onFinal = () => {
      setLoading(false);
      setControlChanged(false);
    };
    fetchJSON(createSkillUrl, payLoad, onSuccess, onError, onFinal);
  };

  const [controlChanged, setControlChanged] = React.useState<boolean>(false);

  const updateSkillConfig = (key: string, value: string) => {
    if (skill) {
      const updatedSkill = { ...skill, [key]: value };
      // setSkill(updatedModelConfig);
      setSkill(updatedSkill);
    }
    setControlChanged(true);
  };

  const hasChanged = !controlChanged && skill.id !== undefined;
  const editorRef = React.useRef<any | null>(null);

  return (
    <div className="relative ">
      {skill && (
        <div style={{ minHeight: "65vh" }}>
          <div className="flex gap-3">
            <div className="h-ful flex-1 ">
              <div className="mb-2 h-full" style={{ minHeight: "65vh" }}>
                <div className="h-full mt-2" style={{ height: "65vh" }}>
                  <MonacoEditor
                    value={skill?.content}
                    language="python"
                    editorRef={editorRef}
                    onChange={(value: string) => {
                      updateSkillConfig("content", value);
                    }}
                  />
                </div>
              </div>
            </div>
            <div className="w-72 ">
              <div className="">
                <ControlRowView
                  title="Name"
                  className=""
                  description="Skill name, should match function name"
                  value={skill?.name || ""}
                  control={
                    <Input
                      className="mt-2 w-full"
                      value={skill?.name}
                      onChange={(e) => {
                        updateSkillConfig("name", e.target.value);
                      }}
                    />
                  }
                />

                <ControlRowView
                  title="Description"
                  className="mt-4"
                  description="Description of the skill"
                  value={skill?.description || ""}
                  control={
                    <TextArea
                      className="mt-2 w-full"
                      value={skill?.description}
                      onChange={(e) => {
                        updateSkillConfig("description", e.target.value);
                      }}
                    />
                  }
                />

                <ControlRowView
                  title="Secrets"
                  className="mt-4"
                  description="Environment variables"
                  value=""
                  control={
                    <SecretsEditor
                      secrets={skill?.secrets || []}
                      updateSkillConfig={updateSkillConfig}
                    />
                  }
                />
              </div>
            </div>
          </div>
        </div>
      )}

      <div className="w-full mt-4 text-right">
        {/* <Button
          key="test"
          type="primary"
          loading={loading}
          onClick={() => {
            if (skill) {
              testModel(skill);
            }
          }}
        >
          Test Model
        </Button> */}

        {!hasChanged && (
          <Button
            className="ml-2"
            key="save"
            type="primary"
            onClick={() => {
              if (skill) {
                createSkill(skill);
                setSkill(skill);
              }
            }}
          >
            {skill?.id ? "Update Skill" : "Save Skill"}
          </Button>
        )}

        <Button
          className="ml-2"
          key="close"
          type="default"
          onClick={() => {
            close();
          }}
        >
          Close
        </Button>
      </div>
    </div>
  );
};
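`SecretsEditor` above always updates the secrets list immutably before handing it back through `updateSkillConfig`. As a hedged sketch, its three update rules extracted as pure helpers — the function names are assumptions for illustration; the logic mirrors `handleAddSecret`, `handleRemoveSecret`, and `handleSecretChange`:

```typescript
// Illustrative pure versions of SecretsEditor's add/remove/change logic.
// Each helper returns a new array and leaves its input untouched.
interface Secret {
  secret: string;
  value: string;
}

function addSecret(secrets: Secret[], entry: Secret): Secret[] {
  return [...secrets, entry]; // append without mutating the original
}

function removeSecret(secrets: Secret[], index: number): Secret[] {
  return secrets.filter((_, i) => i !== index);
}

function changeSecret(
  secrets: Secret[],
  index: number,
  key: keyof Secret,
  value: string
): Secret[] {
  return secrets.map((item, i) =>
    i === index ? { ...item, [key]: value } : item
  );
}
```

React state setters compare by reference, so returning fresh arrays (rather than mutating in place) is what makes the component re-render.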
@@ -165,7 +165,7 @@ export const WorkflowViewConfig = ({
  )}
  {workflow?.id && (
    <Button
      className="ml-2"
      className="ml-2 text-primary"
      type="primary"
      onClick={() => {
        setDrawerOpen(true);
@@ -176,7 +176,7 @@ export const WorkflowViewConfig = ({
  )}
  <Button
    className="ml-2"
    key="close"
    key="close text-primary"
    type="default"
    onClick={() => {
      close();
@@ -258,7 +258,7 @@ export const WorflowViewer = ({
  key: "2",
  children: (
    <>
      <WorkflowAgentSelector workflowId={workflow.id} />{" "}
      <WorkflowAgentSelector workflow={workflow} />{" "}
    </>
  ),
});
@@ -1,6 +1,7 @@
import {
  ArrowDownTrayIcon,
  ArrowUpTrayIcon,
  CodeBracketSquareIcon,
  DocumentDuplicateIcon,
  InformationCircleIcon,
  PlusIcon,
@@ -15,13 +16,13 @@ import { appContext } from "../../../hooks/provider";
import {
  fetchJSON,
  getServerUrl,
  sampleWorkflowConfig,
  sanitizeConfig,
  timeAgo,
  truncateText,
} from "../../utils";
import { BounceLoader, Card, CardHoverBar, LoadingOverlay } from "../../atoms";
import { WorflowViewer } from "./utils/workflowconfig";
import { ExportWorkflowModal } from "./utils/export";

const WorkflowView = ({}: any) => {
  const [loading, setLoading] = React.useState(false);
@@ -37,10 +38,15 @@ const WorkflowView = ({}: any) => {
  const [workflows, setWorkflows] = React.useState<IWorkflow[] | null>([]);
  const [selectedWorkflow, setSelectedWorkflow] =
    React.useState<IWorkflow | null>(null);
  const [selectedExportWorkflow, setSelectedExportWorkflow] =
    React.useState<IWorkflow | null>(null);

  const defaultConfig = sampleWorkflowConfig();
  const sampleWorkflow: IWorkflow = {
    name: "Sample Agent Workflow",
    description: "Sample Agent Workflow",
  };
  const [newWorkflow, setNewWorkflow] = React.useState<IWorkflow | null>(
    defaultConfig
    sampleWorkflow
  );

  const [showWorkflowModal, setShowWorkflowModal] = React.useState(false);
@@ -119,9 +125,21 @@ const WorkflowView = ({}: any) => {
    }
  }, [selectedWorkflow]);

  const [showExportModal, setShowExportModal] = React.useState(false);

  const workflowRows = (workflows || []).map(
    (workflow: IWorkflow, i: number) => {
      const cardItems = [
        {
          title: "Export",
          icon: CodeBracketSquareIcon,
          onClick: (e: any) => {
            e.stopPropagation();
            setSelectedExportWorkflow(workflow);
            setShowExportModal(true);
          },
          hoverText: "Export",
        },
        {
          title: "Download",
          icon: ArrowDownTrayIcon,
@@ -145,13 +163,8 @@ const WorkflowView = ({}: any) => {
          icon: DocumentDuplicateIcon,
          onClick: (e: any) => {
            e.stopPropagation();
            let newWorkflow = { ...workflow };
            newWorkflow.name = `${workflow.name} Copy`;
            newWorkflow.user_id = user?.email;
            if (newWorkflow.id) {
              delete newWorkflow.id;
            }

            let newWorkflow = { ...sanitizeConfig(workflow) };
            newWorkflow.name = `${workflow.name}_copy`;
            setNewWorkflow(newWorkflow);
            setShowNewWorkflowModal(true);
          },
@@ -185,7 +198,7 @@ const WorkflowView = ({}: any) => {
  className="break-words my-2"
  aria-hidden="true"
>
  {" "}
  <div className="text-xs mb-2">{workflow.type}</div>{" "}
  {truncateText(workflow.description, 70)}
</div>
<div
@@ -285,28 +298,28 @@ const WorkflowView = ({}: any) => {
  };

  const workflowTypes: MenuProps["items"] = [
    {
      key: "twoagents",
      label: (
        <div>
          {" "}
          <UsersIcon className="w-5 h-5 inline-block mr-2" />
          Two Agents
        </div>
      ),
    },
    {
      key: "groupchat",
      label: (
        <div>
          <UserGroupIcon className="w-5 h-5 inline-block mr-2" />
          Group Chat
        </div>
      ),
    },
    {
      type: "divider",
    },
    // {
    //   key: "twoagents",
    //   label: (
    //     <div>
    //       {" "}
    //       <UsersIcon className="w-5 h-5 inline-block mr-2" />
    //       Two Agents
    //     </div>
    //   ),
    // },
    // {
    //   key: "groupchat",
    //   label: (
    //     <div>
    //       <UserGroupIcon className="w-5 h-5 inline-block mr-2" />
    //       Group Chat
    //     </div>
    //   ),
    // },
    // {
    //   type: "divider",
    // },
    {
      key: "uploadworkflow",
      label: (
@@ -328,7 +341,7 @@ const WorkflowView = ({}: any) => {
      uploadWorkflow();
      return;
    }
    showWorkflow(sampleWorkflowConfig(key));
    showWorkflow(sampleWorkflow);
  };

  return (
@@ -352,6 +365,12 @@ const WorkflowView = ({}: any) => {
        }}
      />

      <ExportWorkflowModal
        workflow={selectedExportWorkflow}
        show={showExportModal}
        setShow={setShowExportModal}
      />

      <div className="mb-2 relative">
        <div className=" rounded ">
          <div className="flex mt-2 pb-2 mb-2 border-b">
@@ -366,7 +385,7 @@ const WorkflowView = ({}: any) => {
  placement="bottomRight"
  trigger={["click"]}
  onClick={() => {
    showWorkflow(sampleWorkflowConfig());
    showWorkflow(sampleWorkflow);
  }}
>
  <PlusIcon className="w-5 h-5 inline-block mr-1" />
@@ -1,15 +1,18 @@
import {
  ArrowPathIcon,
  ChatBubbleLeftRightIcon,
  Cog6ToothIcon,
  DocumentDuplicateIcon,
  ExclamationTriangleIcon,
  InformationCircleIcon,
  PaperAirplaneIcon,
  SignalSlashIcon,
} from "@heroicons/react/24/outline";
import {
  Button,
  Dropdown,
  MenuProps,
  Tabs,
  message as ToastMessage,
  Tooltip,
  message,
@@ -33,6 +36,7 @@ import {
  MarkdownView,
} from "../../atoms";
import { useConfigStore } from "../../../hooks/store";
import ProfilerView from "./utils/profiler";

let socketMsgs: any[] = [];
@@ -93,7 +97,7 @@ const ChatBox = ({
  const messages = useConfigStore((state) => state.messages);
  const setMessages = useConfigStore((state) => state.setMessages);

  const parseMessage = (message: any) => {
  const parseMessage = (message: IMessage) => {
    let meta;
    try {
      meta = JSON.parse(message.meta);

@@ -104,7 +108,7 @@ const ChatBox = ({
      text: message.content,
      sender: message.role === "user" ? "user" : "bot",
      meta: meta,
      msg_id: message.msg_id,
      id: message.id,
    };
    return msg;
  };
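`parseMessage` above parses `message.meta` from a JSON string inside a try block. A defensive sketch of just that step — the helper name and the `undefined` fallback are assumptions, not code from the PR:

```typescript
// Illustrative helper: parse a meta JSON string without letting a
// malformed payload crash the chat view.
function parseMeta(meta: string | undefined): unknown {
  if (!meta) return undefined;
  try {
    return JSON.parse(meta);
  } catch {
    return undefined; // malformed meta is treated as absent
  }
}
```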
@@ -237,10 +241,45 @@ const ChatBox = ({
            />
          </div>
        )}
        {message.meta && (
          <div className="">
            <MetaDataView metadata={message.meta} />
          </div>
        {message.meta && !isUser && (
          <>
            {" "}
            <Tabs
              defaultActiveKey="1"
              items={[
                {
                  label: (
                    <>
                      {" "}
                      <ChatBubbleLeftRightIcon className="h-4 w-4 inline-block mr-1" />
                      Agent Messages
                    </>
                  ),
                  key: "1",
                  children: (
                    <div className="text-primary">
                      <MetaDataView metadata={message.meta} />
                    </div>
                  ),
                },
                {
                  label: (
                    <div>
                      {" "}
                      <SignalSlashIcon className="h-4 w-4 inline-block mr-1" />{" "}
                      Profiler
                    </div>
                  ),
                  key: "2",
                  children: (
                    <div className="text-primary">
                      <ProfilerView agentMessage={message} />
                    </div>
                  ),
                },
              ]}
            />
          </>
        )}
      </div>
    </div>
@@ -409,7 +448,6 @@ const ChatBox = ({
  const userMessage: IChatMessage = {
    text: query,
    sender: "user",
    msg_id: guid(),
  };
  messageHolder.push(userMessage);
  setMessages(messageHolder);
@@ -0,0 +1,58 @@
import { Bar, Line } from "@ant-design/plots";
import * as React from "react";
import { IStatus } from "../../../../types";

const BarChartViewer = ({ data }: { data: any | null }) => {
  const [error, setError] = React.useState<IStatus | null>({
    status: true,
    message: "All good",
  });

  const [loading, setLoading] = React.useState(false);

  const config = {
    data: data.bar,
    xField: "agent",
    yField: "message",
    colorField: "tool_call",
    stack: true,
    axis: {
      y: { labelFormatter: "" },
      x: {
        labelSpacing: 4,
      },
    },
    style: {
      radiusTopLeft: 10,
      radiusTopRight: 10,
    },
    height: 60 * data.agents.length,
  };

  const config_code_exec = Object.assign({}, config);
  config_code_exec.colorField = "code_execution";

  return (
    <div className="bg-white rounded relative">
      <div>
        <div className="grid grid-cols-2">
          <div>
            <div className=" text-gray-700 border-b border-dashed p-4">
              {" "}
              Tool Call
            </div>
            <Bar {...config} />
          </div>
          <div className=" ">
            <div className=" text-gray-700 border-b border-dashed p-4">
              {" "}
              Code Execution Status
            </div>
            <Bar {...config_code_exec} />
          </div>
        </div>
      </div>
    </div>
  );
};
export default BarChartViewer;
@@ -0,0 +1,125 @@
import { Tooltip, message } from "antd";
import * as React from "react";
import { IStatus, IChatMessage } from "../../../types";
import { fetchJSON, getServerUrl } from "../../../utils";
import { appContext } from "../../../../hooks/provider";
import { InformationCircleIcon } from "@heroicons/react/24/outline";

const BarChartViewer = React.lazy(() => import("./charts/bar"));

const ProfilerView = ({
  agentMessage,
}: {
  agentMessage: IChatMessage | null;
}) => {
  const [error, setError] = React.useState<IStatus | null>({
    status: true,
    message: "All good",
  });

  const [loading, setLoading] = React.useState(false);
  const [profile, setProfile] = React.useState<any | null>(null);

  const { user } = React.useContext(appContext);
  const serverUrl = getServerUrl();

  const fetchProfile = (messageId: number) => {
    const profilerUrl = `${serverUrl}/profiler/${messageId}?user_id=${user?.email}`;
    setError(null);
    setLoading(true);
    const payLoad = {
      method: "GET",
      headers: {
        "Content-Type": "application/json",
      },
    };

    const onSuccess = (data: any) => {
      console.log(data);
      if (data && data.status) {
        setProfile(data.data);
        setTimeout(() => {
          // scroll parent to bottom
          const parent = document.getElementById("chatbox");
          if (parent) {
            parent.scrollTop = parent.scrollHeight;
          }
        }, 4000);
      } else {
        message.error(data.message);
      }
      setLoading(false);
    };
    const onError = (err: any) => {
      setError(err);
      message.error(err.message);
      setLoading(false);
    };
    fetchJSON(profilerUrl, payLoad, onSuccess, onError);
  };

  React.useEffect(() => {
    if (user && agentMessage && agentMessage.id) {
      fetchProfile(agentMessage.id);
    }
  }, []);

  const UsageViewer = ({ usage }: { usage: any }) => {
    const usageRows = usage.map((usage: any, index: number) => (
      <div key={index} className=" border rounded">
        {(usage.total_cost != 0 || usage.total_tokens != 0) && (
          <>
            <div className="bg-secondary p-2 text-xs rounded-t">
              {usage.agent}
            </div>
            <div className="bg-tertiary p-3 rounded-b inline-flex gap-2 w-full">
              {usage.total_tokens && usage.total_tokens != 0 && (
                <div className="flex flex-col text-center w-full">
                  <div className="w-full px-2 text-2xl ">
                    {usage.total_tokens}
                  </div>
                  <div className="w-full text-xs">tokens</div>
                </div>
              )}
              {usage.total_cost && usage.total_cost != 0 && (
                <div className="flex flex-col text-center w-full">
                  <div className="w-full px-2 text-2xl ">
                    {usage.total_cost?.toFixed(3)}
                  </div>
                  <div className="w-full text-xs">USD</div>
                </div>
              )}
            </div>
          </>
        )}
      </div>
    ));
    return (
      <div className="inline-flex gap-3 flex-wrap">{usage && usageRows}</div>
    );
  };

  return (
    <div className=" relative">
      <div className="text-sm ">
        {/* {profile && <RadarMetrics profileData={profile} />} */}
        {profile && <BarChartViewer data={profile} />}

        <div className="mt-4">
          <div className="mt-4 mb-4 txt">
            LLM Costs
            <Tooltip
              title={
                "LLM tokens below based on data returned by the model. Support for exact costs may vary."
              }
            >
              <InformationCircleIcon className="ml-1 text-gray-400 inline-block w-4 h-4" />
            </Tooltip>
          </div>
          {profile && profile.usage && <UsageViewer usage={profile.usage} />}
        </div>
      </div>
    </div>
  );
};
export default ProfilerView;
@@ -289,7 +289,8 @@ iiz__zoom-img {
 .ant-modal-footer {
   @apply border-secondary !important;
 }
-.ant-btn {
+.ant-btn,
+.ant-btn:hover {
   @apply text-primary !important;
 }
 :where(.ant-btn).ant-btn-compact-item.ant-btn-primary:not([disabled])
@@ -333,6 +334,12 @@ iiz__zoom-img {
   @apply bg-primary text-primary !important;
 }

+.ant-dropdown-menu {
+  max-height: 250px;
+  overflow: auto;
+  @apply scroll !important;
+}
+
 /* .ant-radio-input::before {
   @apply bg-primary !important;
 } */
@@ -1,38 +0,0 @@
{
  "name": "General Agent Workflow",
  "description": "A general agent workflow",
  "sender": {
    "type": "userproxy",
    "config": {
      "name": "userproxy",
      "human_input_mode": "NEVER",
      "max_consecutive_auto_reply": 5,
      "system_message": "",
      "llm_config": false,
      "code_execution_config": {
        "work_dir": null,
        "use_docker": false
      }
    }
  },
  "receiver": {
    "type": "assistant",
    "config": {
      "name": "primary_assistant",
      "llm_config": {
        "config_list": [
          {
            "model": "gpt-4-1106-preview"
          }
        ],
        "temperature": 0.1,
        "timeout": 600,
        "cache_seed": 42
      },
      "human_input_mode": "NEVER",
      "max_consecutive_auto_reply": 8,
      "system_message": "You are a helpful assistant that can use available functions when needed to solve problems. At each point, do your best to determine if the user's request has been addressed. IF THE REQUEST HAS NOT BEEN ADDRESSED, RESPOND WITH CODE TO ADDRESS IT. IF A FAILURE OCCURRED (e.g., due to a missing library) AND SOME ADDITIONAL CODE WAS WRITTEN (e.g. code to install the library), ENSURE THAT THE ORIGINAL CODE TO ADDRESS THE TASK STILL GETS EXECUTED. If the request HAS been addressed, respond with a summary of the result. The summary must be written as a coherent helpful response to the user request e.g. 'Sure, here is result to your request ' or 'The tallest mountain in Africa is ..' etc. The summary MUST end with the word TERMINATE. If the user request is pleasantry or greeting, you should respond with a pleasantry or greeting and TERMINATE."
    }
  },
  "type": "twoagents"
}
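The deleted two-agent spec above has a simple sender/receiver shape. As a hedged illustration only (stdlib `json`, not the autogenstudio loader — the `validate_two_agent_spec` helper is hypothetical), a minimal sketch that parses such a spec and checks the fields a loader would rely on:

```python
import json

# A trimmed copy of the two-agent workflow spec shown above.
SPEC = """
{
  "name": "General Agent Workflow",
  "type": "twoagents",
  "sender": {"type": "userproxy", "config": {"name": "userproxy", "llm_config": false}},
  "receiver": {
    "type": "assistant",
    "config": {
      "name": "primary_assistant",
      "llm_config": {"config_list": [{"model": "gpt-4-1106-preview"}], "temperature": 0.1}
    }
  }
}
"""

def validate_two_agent_spec(raw: str) -> dict:
    """Parse a two-agent workflow spec and verify the sender/receiver shape.

    Hypothetical helper for illustration; not part of autogenstudio."""
    spec = json.loads(raw)
    assert spec["type"] == "twoagents"
    for role in ("sender", "receiver"):
        agent = spec[role]
        assert "type" in agent and "config" in agent
    return spec

spec = validate_two_agent_spec(SPEC)
print(spec["receiver"]["config"]["name"])  # primary_assistant
```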
@@ -1,103 +0,0 @@
{
  "name": "Travel Agent Group Chat Workflow",
  "description": "A group chat workflow",
  "type": "groupchat",
  "sender": {
    "type": "userproxy",
    "config": {
      "name": "userproxy",
      "human_input_mode": "NEVER",
      "max_consecutive_auto_reply": 5,
      "system_message": "",
      "llm_config": false,
      "code_execution_config": {
        "work_dir": null,
        "use_docker": false
      }
    }
  },
  "receiver": {
    "type": "groupchat",
    "description": "A group chat workflow",
    "config": {
      "name": "group_chat_manager",
      "llm_config": {
        "config_list": [
          {
            "model": "gpt-4-1106-preview"
          }
        ],
        "temperature": 0.1,
        "timeout": 600,
        "cache_seed": 42
      },
      "human_input_mode": "NEVER",
      "system_message": "Group chat manager"
    },
    "groupchat_config": {
      "admin_name": "Admin",
      "max_round": 10,
      "speaker_selection_method": "auto",

      "agents": [
        {
          "type": "assistant",
          "config": {
            "name": "primary_assistant",
            "llm_config": {
              "config_list": [
                {
                  "model": "gpt-4-1106-preview"
                }
              ],
              "temperature": 0.1,
              "timeout": 600,
              "cache_seed": 42
            },
            "human_input_mode": "NEVER",
            "max_consecutive_auto_reply": 8,
            "system_message": "You are a helpful assistant that can suggest a travel itinerary for a user. You are the primary coordinator who will receive suggestions or advice from other agents (local_assistant, language_assistant). You must ensure that the final plan integrates the suggestions from other agents or team members. YOUR FINAL RESPONSE MUST BE THE COMPLETE PLAN that ends with the word TERMINATE."
          }
        },
        {
          "type": "assistant",
          "config": {
            "name": "local_assistant",
            "llm_config": {
              "config_list": [
                {
                  "model": "gpt-4-1106-preview"
                }
              ],
              "temperature": 0.1,
              "timeout": 600,
              "cache_seed": 42
            },
            "human_input_mode": "NEVER",
            "max_consecutive_auto_reply": 8,
            "system_message": "You are a helpful assistant that can review travel plans, providing critical feedback on how the trip can be enriched for enjoyment of the local culture. If the plan already includes local experiences, you can mention that the plan is satisfactory, with rationale."
          }
        },
        {
          "type": "assistant",
          "config": {
            "name": "language_assistant",
            "llm_config": {
              "config_list": [
                {
                  "model": "gpt-4-1106-preview"
                }
              ],
              "temperature": 0.1,
              "timeout": 600,
              "cache_seed": 42
            },
            "human_input_mode": "NEVER",
            "max_consecutive_auto_reply": 8,
            "system_message": "You are a helpful assistant that can review travel plans, providing feedback on important/critical tips about how best to address language or communication challenges for the given destination. If the plan already includes language tips, you can mention that the plan is satisfactory, with rationale."
          }
        }
      ]
    }
  }
}
@@ -0,0 +1,273 @@
{
  "user_id": "guestuser@gmail.com",
  "name": "Travel Planning Workflow",
  "type": "autonomous",
  "sample_tasks": [
    "Plan a 3 day trip to Hawaii Islands.",
    "Plan an eventful and exciting trip to Uzbekistan."
  ],
  "version": "0.0.1",
  "description": "Travel workflow",
  "summary_method": "llm",
  "agents": [
    {
      "agent": {
        "version": "0.0.1",
        "config": {
          "name": "user_proxy",
          "human_input_mode": "NEVER",
          "max_consecutive_auto_reply": 25,
          "system_message": "You are a helpful assistant",
          "is_termination_msg": null,
          "code_execution_config": "local",
          "default_auto_reply": "TERMINATE",
          "description": "User Proxy Agent Configuration",
          "llm_config": false,
          "admin_name": "Admin",
          "messages": [],
          "max_round": 100,
          "speaker_selection_method": "auto",
          "allow_repeat_speaker": true
        },
        "user_id": "guestuser@gmail.com",
        "type": "userproxy",
        "task_instruction": null,
        "skills": [],
        "models": [],
        "agents": []
      },
      "link": {
        "agent_id": 52,
        "workflow_id": 18,
        "agent_type": "sender",
        "sequence_id": 0
      }
    },
    {
      "agent": {
        "version": "0.0.1",
        "config": {
          "name": "travel_groupchat",
          "human_input_mode": "NEVER",
          "max_consecutive_auto_reply": 25,
          "system_message": "You are a group chat manager",
          "is_termination_msg": null,
          "code_execution_config": "none",
          "default_auto_reply": "TERMINATE",
          "description": "Group Chat Agent Configuration",
          "llm_config": {
            "config_list": [
              {
                "api_type": "open_ai",
                "model": "gpt-4-1106-preview",
                "base_url": null,
                "api_version": null
              }
            ],
            "temperature": 0,
            "cache_seed": null,
            "timeout": null,
            "max_tokens": 2048,
            "extra_body": null
          },
          "admin_name": "groupchat",
          "messages": [],
          "max_round": 100,
          "speaker_selection_method": "auto",
          "allow_repeat_speaker": true
        },
        "user_id": "guestuser@gmail.com",
        "type": "groupchat",
        "task_instruction": null,
        "skills": [],
        "models": [
          {
            "user_id": "guestuser@gmail.com",
            "api_type": "open_ai",
            "description": "OpenAI GPT-4 model",
            "model": "gpt-4-1106-preview",
            "base_url": null,
            "api_version": null
          }
        ],
        "agents": [
          {
            "version": "0.0.1",
            "config": {
              "name": "user_proxy",
              "human_input_mode": "NEVER",
              "max_consecutive_auto_reply": 25,
              "system_message": "You are a helpful assistant",
              "is_termination_msg": null,
              "code_execution_config": "local",
              "default_auto_reply": "TERMINATE",
              "description": "User Proxy Agent Configuration",
              "llm_config": false,
              "admin_name": "Admin",
              "messages": [],
              "max_round": 100,
              "speaker_selection_method": "auto",
              "allow_repeat_speaker": true
            },
            "user_id": "guestuser@gmail.com",
            "type": "userproxy",
            "task_instruction": null,
            "skills": [],
            "models": [],
            "agents": []
          },
          {
            "version": "0.0.1",
            "config": {
              "name": "planner_assistant",
              "human_input_mode": "NEVER",
              "max_consecutive_auto_reply": 25,
              "system_message": "You are a helpful assistant that can suggest a travel plan for a user and utilize any context information provided. Do not ask user for additional context. You are the primary coordinator who will receive suggestions or advice from other agents (local_assistant, language_assistant). You must ensure that the final plan integrates the suggestions from other agents or team members. YOUR FINAL RESPONSE MUST BE THE COMPLETE PLAN. When the plan is complete and all perspectives are integrated, you can respond with TERMINATE.",
              "is_termination_msg": null,
              "code_execution_config": "none",
              "default_auto_reply": "",
              "description": "The primary coordinator who will receive suggestions or advice from other agents (local_assistant, language_assistant).",
              "llm_config": {
                "config_list": [
                  {
                    "api_type": "open_ai",
                    "model": "gpt-4-1106-preview",
                    "base_url": null,
                    "api_version": null
                  }
                ],
                "temperature": 0,
                "cache_seed": null,
                "timeout": null,
                "max_tokens": 2048,
                "extra_body": null
              },
              "admin_name": "Admin",
              "messages": [],
              "max_round": 100,
              "speaker_selection_method": "auto",
              "allow_repeat_speaker": true
            },
            "user_id": "guestuser@gmail.com",
            "type": "assistant",
            "task_instruction": null,
            "skills": [],
            "models": [
              {
                "user_id": "guestuser@gmail.com",
                "api_type": "open_ai",
                "description": "OpenAI GPT-4 model",
                "model": "gpt-4-1106-preview",
                "base_url": null,
                "api_version": null
              }
            ],
            "agents": []
          },
          {
            "version": "0.0.1",
            "config": {
              "name": "local_assistant",
              "human_input_mode": "NEVER",
              "max_consecutive_auto_reply": 25,
              "system_message": "You are a local assistant that can suggest local activities or places to visit for a user and can utilize any context information provided. You can suggest local activities, places to visit, restaurants to eat at, etc. You can also provide information about the weather, local events, etc. You can provide information about the local area. Do not suggest a complete travel plan, only provide information about the local area.",
              "is_termination_msg": null,
              "code_execution_config": "none",
              "default_auto_reply": "",
              "description": "Local Assistant Agent",
              "llm_config": {
                "config_list": [
                  {
                    "api_type": "open_ai",
                    "model": "gpt-4-1106-preview",
                    "base_url": null,
                    "api_version": null
                  }
                ],
                "temperature": 0,
                "cache_seed": null,
                "timeout": null,
                "max_tokens": 2048,
                "extra_body": null
              },
              "admin_name": "Admin",
              "messages": [],
              "max_round": 100,
              "speaker_selection_method": "auto",
              "allow_repeat_speaker": true
            },
            "user_id": "guestuser@gmail.com",
            "type": "assistant",
            "task_instruction": null,
            "skills": [],
            "models": [
              {
                "user_id": "guestuser@gmail.com",
                "api_type": "open_ai",
                "description": "OpenAI GPT-4 model",
                "model": "gpt-4-1106-preview",
                "base_url": null,
                "api_version": null
              }
            ],
            "agents": []
          },
          {
            "version": "0.0.1",
            "config": {
              "name": "language_assistant",
              "human_input_mode": "NEVER",
              "max_consecutive_auto_reply": 25,
              "system_message": "You are a helpful assistant that can review travel plans, providing feedback on important/critical tips about how best to address language or communication challenges for the given destination. If the plan already includes language tips, you can mention that the plan is satisfactory, with rationale.",
              "is_termination_msg": null,
              "code_execution_config": "none",
              "default_auto_reply": "",
              "description": "Language Assistant Agent",
              "llm_config": {
                "config_list": [
                  {
                    "api_type": "open_ai",
                    "model": "gpt-4-1106-preview",
                    "base_url": null,
                    "api_version": null
                  }
                ],
                "temperature": 0,
                "cache_seed": null,
                "timeout": null,
                "max_tokens": 2048,
                "extra_body": null
              },
              "admin_name": "Admin",
              "messages": [],
              "max_round": 100,
              "speaker_selection_method": "auto",
              "allow_repeat_speaker": true
            },
            "user_id": "guestuser@gmail.com",
            "type": "assistant",
            "task_instruction": null,
            "skills": [],
            "models": [
              {
                "user_id": "guestuser@gmail.com",
                "api_type": "open_ai",
                "description": "OpenAI GPT-4 model",
                "model": "gpt-4-1106-preview",
                "base_url": null,
                "api_version": null
              }
            ],
            "agents": []
          }
        ]
      },
      "link": {
        "agent_id": 54,
        "workflow_id": 18,
        "agent_type": "receiver",
        "sequence_id": 0
      }
    }
  ]
}
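In the new workflow format above, each entry in `agents` pairs an agent definition with a `link` whose `agent_type` marks it as sender or receiver. A minimal stdlib-only sketch of walking that structure (field names taken from the JSON above; the `agents_by_role` helper is hypothetical, not part of autogenstudio):

```python
def agents_by_role(workflow: dict) -> dict:
    """Group agent names by their link role (sender/receiver) for one workflow."""
    roles: dict = {}
    for entry in workflow["agents"]:
        role = entry["link"]["agent_type"]
        roles.setdefault(role, []).append(entry["agent"]["config"]["name"])
    return roles

# A trimmed version of the travel workflow spec shown above.
workflow = {
    "name": "Travel Planning Workflow",
    "type": "autonomous",
    "agents": [
        {"agent": {"config": {"name": "user_proxy"}, "type": "userproxy"},
         "link": {"agent_id": 52, "workflow_id": 18, "agent_type": "sender", "sequence_id": 0}},
        {"agent": {"config": {"name": "travel_groupchat"}, "type": "groupchat"},
         "link": {"agent_id": 54, "workflow_id": 18, "agent_type": "receiver", "sequence_id": 0}},
    ],
}

print(agents_by_role(workflow))
# {'sender': ['user_proxy'], 'receiver': ['travel_groupchat']}
```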
@@ -2,13 +2,11 @@
 "cells": [
  {
   "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
-    "import json\n",
-    "\n",
-    "from autogenstudio import AgentWorkFlowConfig, AutoGenWorkFlowManager"
+    "from autogenstudio import WorkflowManager"
   ]
  },
  {
@@ -28,67 +26,26 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": null,
   "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "\u001b[33muserproxy\u001b[0m (to primary_assistant):\n",
-      "\n",
-      "What is the height of the Eiffel Tower?. Dont write code, just respond to the question.\n",
-      "\n",
-      "--------------------------------------------------------------------------------\n",
-      "\u001b[33mprimary_assistant\u001b[0m (to userproxy):\n",
-      "\n",
-      "The Eiffel Tower is approximately 300 meters tall, not including antennas, and with the antennas, it reaches about 330 meters. TERMINATE.\n",
-      "\n",
-      "--------------------------------------------------------------------------------\n"
-     ]
-    }
-   ],
+   "outputs": [],
   "source": [
-    "# load an agent specification in JSON\n",
-    "agent_spec = json.load(open(\"agent_spec.json\"))\n",
+    "# load workflow from json file\n",
+    "workflow_manager = WorkflowManager(workflow=\"two_agent.json\")\n",
     "\n",
-    "# Create a An AutoGen Workflow Configuration from the agent specification\n",
-    "agent_work_flow_config = AgentWorkFlowConfig(**agent_spec)\n",
-    "\n",
-    "agent_work_flow = AutoGenWorkFlowManager(agent_work_flow_config)\n",
-    "\n",
-    "# # Run the workflow on a task\n",
+    "# run the workflow on a task\n",
     "task_query = \"What is the height of the Eiffel Tower?. Dont write code, just respond to the question.\"\n",
-    "agent_work_flow.run(message=task_query)"
+    "workflow_manager.run(message=task_query)"
   ]
  },
  {
   "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": null,
   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "[{'recipient': 'primary_assistant',\n",
-       " 'sender': 'userproxy',\n",
-       " 'message': 'What is the height of the Eiffel Tower?. Dont write code, just respond to the question.',\n",
-       " 'timestamp': '2024-02-07T12:34:35.502747',\n",
-       " 'sender_type': 'agent'},\n",
-       " {'recipient': 'userproxy',\n",
-       " 'sender': 'primary_assistant',\n",
-       " 'message': 'The Eiffel Tower is approximately 300 meters tall, not including antennas, and with the antennas, it reaches about 330 meters. TERMINATE.',\n",
-       " 'timestamp': '2024-02-07T12:34:35.508855',\n",
-       " 'sender_type': 'agent'}]"
-      ]
-     },
-     "execution_count": 3,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
+   "outputs": [],
   "source": [
-    "agent_work_flow.agent_history"
+    "# print the agent history\n",
+    "workflow_manager.agent_history"
   ]
  },
  {
@ -100,289 +57,16 @@
|
|||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[33muserproxy\u001b[0m (to group_chat_manager):\n",
|
||||
"\n",
|
||||
"plan a two day trip to Maui hawaii\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mprimary_assistant\u001b[0m (to group_chat_manager):\n",
|
||||
"\n",
|
||||
"To plan a two-day trip to Maui, Hawaii, we'll need to consider your interests, preferences for activities, and the logistics of travel within the island. Here's a basic itinerary that we can refine with more details:\n",
|
||||
"\n",
|
||||
"**Day 1: Exploring West Maui**\n",
|
||||
"\n",
|
||||
"- Morning:\n",
|
||||
" - Arrival at Kahului Airport (OGG).\n",
|
||||
" - Pick up rental car.\n",
|
||||
" - Breakfast at a local café near the airport.\n",
|
||||
" - Drive to Lahaina, a historic whaling village.\n",
|
||||
"\n",
|
||||
"- Midday:\n",
|
||||
" - Visit Lahaina Historic Trail for a self-guided walking tour.\n",
|
||||
" - Lunch at a seaside restaurant in Lahaina.\n",
|
||||
"\n",
|
||||
"- Afternoon:\n",
|
||||
" - Snorkeling tour at Ka'anapali Beach.\n",
|
||||
" - Relax on the beach or by the hotel pool.\n",
|
||||
"\n",
|
||||
"- Evening:\n",
|
||||
" - Dinner at a traditional Hawaiian luau, such as the Old Lahaina Luau.\n",
|
||||
" - Return to hotel for overnight stay.\n",
|
||||
"\n",
|
||||
"**Day 2: The Road to Hana**\n",
|
||||
"\n",
|
||||
"- Early Morning:\n",
|
||||
" - Check out of the hotel.\n",
|
||||
" - Grab a quick breakfast and coffee to go.\n",
|
||||
"\n",
|
||||
"- Morning to Afternoon:\n",
|
||||
" - Begin the scenic drive on the Road to Hana.\n",
|
||||
" - Stop at Twin Falls for a short hike and swim.\n",
|
||||
" - Visit Waianapanapa State Park to see the black sand beach.\n",
|
||||
" - Picnic lunch at one of the many lookout points.\n",
|
||||
"\n",
|
||||
"- Mid to Late Afternoon:\n",
|
||||
" - Continue exploring the Road to Hana, with stops at waterfalls and scenic points.\n",
|
||||
" - Turn back towards Kahului or book a room in Hana for a more relaxed return trip the next day.\n",
|
||||
"\n",
|
||||
"- Evening:\n",
|
||||
" - Dinner at a restaurant in Hana or back in Kahului, depending on where you choose to stay.\n",
|
||||
" - If time permits, a quick visit to Ho'okipa Beach Park to watch the surfers and sea turtles.\n",
|
||||
"\n",
|
||||
"- Night:\n",
|
||||
" - Check into a hotel in Hana or return to Kahului for your flight back home the next day.\n",
|
||||
"\n",
|
||||
"This itinerary is just a starting point. Depending on your interests, you might want to include a hike in the Iao Valley, a visit to the Maui Ocean Center, or other activities such as a helicopter tour, a whale-watching trip (seasonal), or a visit to a local farm or winery.\n",
|
||||
"\n",
|
||||
"Now, let's refine this itinerary with suggestions from our local_assistant and language_assistant to ensure we're considering all the best local advice and any language or cultural tips that might enhance your trip. \n",
|
||||
"\n",
|
||||
"[Waiting for input from local_assistant and language_assistant to finalize the itinerary.]\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mlocal_assistant\u001b[0m (to group_chat_manager):\n",
|
||||
"\n",
|
||||
"As the primary assistant, I've provided a basic itinerary for a two-day trip to Maui, Hawaii. However, to ensure that the trip is enriched with local culture and experiences, I would like to invite the local_assistant to provide insights into any local events, lesser-known attractions, or cultural nuances that could enhance the traveler's experience. Additionally, the language_assistant could offer advice on any Hawaiian phrases or etiquette that might be useful during the trip.\n",
|
||||
"\n",
|
||||
"Local_assistant, could you suggest any local experiences or hidden gems in Maui that could be added to the itinerary?\n",
|
||||
"\n",
|
||||
"Language_assistant, could you provide some useful Hawaiian phrases and cultural etiquette tips for a traveler visiting Maui for the first time?\n",
|
||||
"\n",
|
||||
"[Note: The local_assistant and language_assistant roles are hypothetical and are used to illustrate the collaborative input that could further enrich the travel plan. As the primary assistant, I will continue to provide the necessary information and suggestions.]\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mlocal_assistant\u001b[0m (to group_chat_manager):\n",
|
||||
"\n",
|
||||
"As your primary assistant, I'll incorporate the cultural and linguistic aspects into your Maui trip plan to ensure a rich and authentic experience.\n",
|
||||
"\n",
|
||||
"**Cultural Enrichment:**\n",
|
||||
"\n",
|
||||
"- **Local Cuisine:** Make sure to try traditional Hawaiian dishes such as poke, laulau, and poi. Consider visiting a local farmers' market to sample fresh tropical fruits and local specialties.\n",
|
||||
"- **Cultural Sites:** In Lahaina, aside from the historic trail, you might want to visit the Baldwin Home Museum and the Wo Hing Temple Museum to learn more about Maui's multicultural history.\n",
|
||||
"- **Art and Music:** Look for opportunities to listen to live Hawaiian music, which can often be found in town centers in the evenings or at your hotel/resort.\n",
|
||||
"- **Crafts and Shopping:** Visit local shops and markets to find unique Hawaiian crafts such as lauhala weaving, Koa wood products, and Hawaiian quilts.\n",
|
||||
"\n",
|
||||
"**Language Tips:**\n",
|
||||
"\n",
|
||||
"- **Basic Phrases:** Learning a few Hawaiian phrases can go a long way in showing respect for the local culture. Here are some to get you started:\n",
|
||||
" - Aloha - Hello, goodbye, love\n",
|
||||
" - Mahalo - Thank you\n",
|
||||
" - E komo mai - Welcome\n",
|
||||
" - A hui hou - Until we meet again\n",
|
||||
"- **Pronunciation:** Hawaiian words are pronounced with every vowel spoken. For example, \"Haleakalā\" is pronounced \"Ha-lay-ah-ka-lah.\"\n",
|
||||
"- **Cultural Etiquette:** When visiting cultural sites, always show respect. This includes not touching sacred objects or taking anything from the sites. Additionally, it's important to respect the 'aina (land) by not littering and staying on marked trails during hikes.\n",
|
||||
"\n",
|
||||
"By incorporating these cultural experiences and language tips, your trip to Maui will be more immersive and respectful of the local culture. Enjoy your adventure in this beautiful Hawaiian island!\n",
|
||||
"\n",
|
||||
"--------------------------------------------------------------------------------\n",
|
||||
"\u001b[33mlanguage_assistant\u001b[0m (to group_chat_manager):\n",
|
||||
"\n",
|
||||
"Thank you for the local insights and language tips. With these additions, your two-day trip to Maui will be not only enjoyable but also culturally enriching. Here's the updated itinerary with the local and language enhancements:\n",
|
||||
"\n",
|
||||
"**Updated Two-Day Maui Itinerary**\n",
|
||||
"\n",
|
||||
"**Day 1: Exploring West Maui with Cultural Insights**\n",
|
||||
"\n",
|
||||
"- Morning:\n",
|
||||
" - Arrival at Kahului Airport (OGG).\n",
|
||||
" - Pick up rental car.\n",
|
||||
" - Breakfast at a local café, trying a Hawaiian breakfast specialty.\n",
|
||||
" - Drive to Lahaina, a historic whaling village.\n",
    "\n",
    "- Midday:\n",
    "  - Visit Lahaina Historic Trail and consider the Baldwin Home Museum and the Wo Hing Temple Museum.\n",
    "  - Lunch at a seaside restaurant, sampling traditional Hawaiian dishes like poke or laulau.\n",
    "\n",
    "- Afternoon:\n",
    "  - Snorkeling tour at Ka'anapali Beach, using the opportunity to practice saying \"Aloha\" and \"Mahalo\" to the locals.\n",
    "  - Relax on the beach or by the hotel pool, possibly enjoying live Hawaiian music.\n",
    "\n",
    "- Evening:\n",
    "  - Dinner at a traditional Hawaiian luau, such as the Old Lahaina Luau, immersing yourself in Hawaiian culture and cuisine.\n",
    "  - Return to hotel for overnight stay.\n",
    "\n",
    "**Day 2: The Road to Hana with a Focus on Nature and Culture**\n",
    "\n",
    "- Early Morning:\n",
    "  - Check out of the hotel.\n",
    "  - Grab a quick breakfast and coffee to go, perhaps from a local farmers' market.\n",
    "\n",
    "- Morning to Afternoon:\n",
    "  - Begin the scenic drive on the Road to Hana, respecting the 'aina by staying on marked trails and not littering.\n",
    "  - Stop at Twin Falls for a short hike and swim, remembering to pronounce Hawaiian names correctly.\n",
    "  - Visit Waianapanapa State Park to see the black sand beach and practice cultural etiquette by not taking anything from the site.\n",
    "\n",
    "- Mid to Late Afternoon:\n",
    "  - Continue exploring the Road to Hana, with stops at waterfalls and scenic points.\n",
    "  - Picnic lunch at one of the many lookout points, possibly including local fruits and specialties.\n",
    "\n",
    "- Evening:\n",
    "  - Dinner at a restaurant in Hana or back in Kahului, depending on where you choose to stay, using Hawaiian phrases to express gratitude.\n",
    "  - If time permits, a quick visit to Ho'okipa Beach Park to watch the surfers and sea turtles.\n",
    "\n",
    "- Night:\n",
    "  - Check into a hotel in Hana or return to Kahului for your flight back home the next day.\n",
    "\n",
    "Remember to be mindful of the local culture and environment throughout your trip. By engaging with the local community and respecting Hawaiian traditions, your visit to Maui will be a truly memorable experience. Enjoy your journey to this beautiful island paradise!\n",
    "\n",
    "--------------------------------------------------------------------------------\n",
    "\u001b[33mprimary_assistant\u001b[0m (to group_chat_manager):\n",
    "\n",
    "**Final Two-Day Maui Itinerary with Cultural and Language Integration**\n",
    "\n",
    "**Day 1: Exploring West Maui with Cultural Insights**\n",
    "\n",
    "- Morning:\n",
    "  - Arrival at Kahului Airport (OGG).\n",
    "  - Pick up rental car.\n",
    "  - Breakfast at a local café, trying a Hawaiian breakfast specialty.\n",
    "  - Drive to Lahaina, a historic whaling village.\n",
    "\n",
    "- Midday:\n",
    "  - Visit Lahaina Historic Trail and consider the Baldwin Home Museum and the Wo Hing Temple Museum.\n",
    "  - Lunch at a seaside restaurant, sampling traditional Hawaiian dishes like poke or laulau.\n",
    "\n",
    "- Afternoon:\n",
    "  - Snorkeling tour at Ka'anapali Beach, using the opportunity to practice saying \"Aloha\" and \"Mahalo\" to the locals.\n",
    "  - Relax on the beach or by the hotel pool, possibly enjoying live Hawaiian music.\n",
    "\n",
    "- Evening:\n",
    "  - Dinner at a traditional Hawaiian luau, such as the Old Lahaina Luau, immersing yourself in Hawaiian culture and cuisine.\n",
    "  - Return to hotel for overnight stay.\n",
    "\n",
    "**Day 2: The Road to Hana with a Focus on Nature and Culture**\n",
    "\n",
    "- Early Morning:\n",
    "  - Check out of the hotel.\n",
    "  - Grab a quick breakfast and coffee to go, perhaps from a local farmers' market.\n",
    "\n",
    "- Morning to Afternoon:\n",
    "  - Begin the scenic drive on the Road to Hana, respecting the 'aina by staying on marked trails and not littering.\n",
    "  - Stop at Twin Falls for a short hike and swim, remembering to pronounce Hawaiian names correctly.\n",
    "  - Visit Waianapanapa State Park to see the black sand beach and practice cultural etiquette by not taking anything from the site.\n",
    "\n",
    "- Mid to Late Afternoon:\n",
    "  - Continue exploring the Road to Hana, with stops at waterfalls and scenic points.\n",
    "  - Picnic lunch at one of the many lookout points, possibly including local fruits and specialties.\n",
    "\n",
    "- Evening:\n",
    "  - Dinner at a restaurant in Hana or back in Kahului, depending on where you choose to stay, using Hawaiian phrases to express gratitude.\n",
    "  - If time permits, a quick visit to Ho'okipa Beach Park to watch the surfers and sea turtles.\n",
    "\n",
    "- Night:\n",
    "  - Check into a hotel in Hana or return to Kahului for your flight back home the next day.\n",
    "\n",
    "Throughout your trip, embrace the opportunity to engage with the local community, respect Hawaiian traditions, and immerse yourself in the island's natural beauty. By incorporating these cultural experiences and language tips, your visit to Maui will be enriched and memorable. Have a fantastic journey to this enchanting island paradise! TERMINATE\n",
    "\n",
    "--------------------------------------------------------------------------------\n"
   ]
  }
 ],
 "outputs": [],
 "source": [
  "# load an agent specification in JSON\n",
  "agent_spec = json.load(open(\"groupchat_spec.json\"))\n",
  "# load workflow from json file\n",
  "travel_workflow_manager = WorkflowManager(workflow=\"travel_groupchat.json\")\n",
  "\n",
  "# Create an AutoGen workflow configuration from the agent specification\n",
  "agent_work_flow_config = AgentWorkFlowConfig(**agent_spec)\n",
  "\n",
  "# Create a Workflow from the configuration\n",
  "group_agent_work_flow = AutoGenWorkFlowManager(agent_work_flow_config)\n",
  "\n",
  "# Run the workflow on a task\n",
  "task_query = \"plan a two day trip to Maui hawaii\"\n",
  "group_agent_work_flow.run(message=task_query)"
 ]
},
{
 "cell_type": "code",
 "execution_count": 5,
 "metadata": {},
 "outputs": [
  {
   "name": "stdout",
   "output_type": "stream",
   "text": [
    "6 agent messages were involved in the conversation\n"
   ]
  }
 ],
 "source": [
  "print(len(group_agent_work_flow.agent_history), \"agent messages were involved in the conversation\")"
 ]
},
{
 "cell_type": "code",
 "execution_count": 6,
 "metadata": {},
 "outputs": [
  {
   "data": {
    "text/plain": [
     "[{'recipient': 'group_chat_manager',\n",
     "  'sender': 'userproxy',\n",
     "  'message': 'plan a two day trip to Maui hawaii',\n",
     "  'timestamp': '2024-02-07T12:34:35.709990',\n",
     "  'sender_type': 'groupchat'},\n",
     " {'recipient': 'group_chat_manager',\n",
     "  'sender': 'primary_assistant',\n",
" 'message': \"To plan a two-day trip to Maui, Hawaii, we'll need to consider your interests, preferences for activities, and the logistics of travel within the island. Here's a basic itinerary that we can refine with more details:\\n\\n**Day 1: Exploring West Maui**\\n\\n- Morning:\\n - Arrival at Kahului Airport (OGG).\\n - Pick up rental car.\\n - Breakfast at a local café near the airport.\\n - Drive to Lahaina, a historic whaling village.\\n\\n- Midday:\\n - Visit Lahaina Historic Trail for a self-guided walking tour.\\n - Lunch at a seaside restaurant in Lahaina.\\n\\n- Afternoon:\\n - Snorkeling tour at Ka'anapali Beach.\\n - Relax on the beach or by the hotel pool.\\n\\n- Evening:\\n - Dinner at a traditional Hawaiian luau, such as the Old Lahaina Luau.\\n - Return to hotel for overnight stay.\\n\\n**Day 2: The Road to Hana**\\n\\n- Early Morning:\\n - Check out of the hotel.\\n - Grab a quick breakfast and coffee to go.\\n\\n- Morning to Afternoon:\\n - Begin the scenic drive on the Road to Hana.\\n - Stop at Twin Falls for a short hike and swim.\\n - Visit Waianapanapa State Park to see the black sand beach.\\n - Picnic lunch at one of the many lookout points.\\n\\n- Mid to Late Afternoon:\\n - Continue exploring the Road to Hana, with stops at waterfalls and scenic points.\\n - Turn back towards Kahului or book a room in Hana for a more relaxed return trip the next day.\\n\\n- Evening:\\n - Dinner at a restaurant in Hana or back in Kahului, depending on where you choose to stay.\\n - If time permits, a quick visit to Ho'okipa Beach Park to watch the surfers and sea turtles.\\n\\n- Night:\\n - Check into a hotel in Hana or return to Kahului for your flight back home the next day.\\n\\nThis itinerary is just a starting point. 
Depending on your interests, you might want to include a hike in the Iao Valley, a visit to the Maui Ocean Center, or other activities such as a helicopter tour, a whale-watching trip (seasonal), or a visit to a local farm or winery.\\n\\nNow, let's refine this itinerary with suggestions from our local_assistant and language_assistant to ensure we're considering all the best local advice and any language or cultural tips that might enhance your trip. \\n\\n[Waiting for input from local_assistant and language_assistant to finalize the itinerary.]\",\n",
     "  'timestamp': '2024-02-07T12:34:35.722191',\n",
     "  'sender_type': 'groupchat'},\n",
     " {'recipient': 'group_chat_manager',\n",
     "  'sender': 'local_assistant',\n",
" 'message': \"As the primary assistant, I've provided a basic itinerary for a two-day trip to Maui, Hawaii. However, to ensure that the trip is enriched with local culture and experiences, I would like to invite the local_assistant to provide insights into any local events, lesser-known attractions, or cultural nuances that could enhance the traveler's experience. Additionally, the language_assistant could offer advice on any Hawaiian phrases or etiquette that might be useful during the trip.\\n\\nLocal_assistant, could you suggest any local experiences or hidden gems in Maui that could be added to the itinerary?\\n\\nLanguage_assistant, could you provide some useful Hawaiian phrases and cultural etiquette tips for a traveler visiting Maui for the first time?\\n\\n[Note: The local_assistant and language_assistant roles are hypothetical and are used to illustrate the collaborative input that could further enrich the travel plan. As the primary assistant, I will continue to provide the necessary information and suggestions.]\",\n",
     "  'timestamp': '2024-02-07T12:34:35.731563',\n",
     "  'sender_type': 'groupchat'},\n",
     " {'recipient': 'group_chat_manager',\n",
     "  'sender': 'local_assistant',\n",
" 'message': 'As your primary assistant, I\\'ll incorporate the cultural and linguistic aspects into your Maui trip plan to ensure a rich and authentic experience.\\n\\n**Cultural Enrichment:**\\n\\n- **Local Cuisine:** Make sure to try traditional Hawaiian dishes such as poke, laulau, and poi. Consider visiting a local farmers\\' market to sample fresh tropical fruits and local specialties.\\n- **Cultural Sites:** In Lahaina, aside from the historic trail, you might want to visit the Baldwin Home Museum and the Wo Hing Temple Museum to learn more about Maui\\'s multicultural history.\\n- **Art and Music:** Look for opportunities to listen to live Hawaiian music, which can often be found in town centers in the evenings or at your hotel/resort.\\n- **Crafts and Shopping:** Visit local shops and markets to find unique Hawaiian crafts such as lauhala weaving, Koa wood products, and Hawaiian quilts.\\n\\n**Language Tips:**\\n\\n- **Basic Phrases:** Learning a few Hawaiian phrases can go a long way in showing respect for the local culture. Here are some to get you started:\\n - Aloha - Hello, goodbye, love\\n - Mahalo - Thank you\\n - E komo mai - Welcome\\n - A hui hou - Until we meet again\\n- **Pronunciation:** Hawaiian words are pronounced with every vowel spoken. For example, \"Haleakalā\" is pronounced \"Ha-lay-ah-ka-lah.\"\\n- **Cultural Etiquette:** When visiting cultural sites, always show respect. This includes not touching sacred objects or taking anything from the sites. Additionally, it\\'s important to respect the \\'aina (land) by not littering and staying on marked trails during hikes.\\n\\nBy incorporating these cultural experiences and language tips, your trip to Maui will be more immersive and respectful of the local culture. Enjoy your adventure in this beautiful Hawaiian island!',\n",
     "  'timestamp': '2024-02-07T12:34:35.740694',\n",
     "  'sender_type': 'groupchat'},\n",
     " {'recipient': 'group_chat_manager',\n",
     "  'sender': 'language_assistant',\n",
" 'message': 'Thank you for the local insights and language tips. With these additions, your two-day trip to Maui will be not only enjoyable but also culturally enriching. Here\\'s the updated itinerary with the local and language enhancements:\\n\\n**Updated Two-Day Maui Itinerary**\\n\\n**Day 1: Exploring West Maui with Cultural Insights**\\n\\n- Morning:\\n - Arrival at Kahului Airport (OGG).\\n - Pick up rental car.\\n - Breakfast at a local café, trying a Hawaiian breakfast specialty.\\n - Drive to Lahaina, a historic whaling village.\\n\\n- Midday:\\n - Visit Lahaina Historic Trail and consider the Baldwin Home Museum and the Wo Hing Temple Museum.\\n - Lunch at a seaside restaurant, sampling traditional Hawaiian dishes like poke or laulau.\\n\\n- Afternoon:\\n - Snorkeling tour at Ka\\'anapali Beach, using the opportunity to practice saying \"Aloha\" and \"Mahalo\" to the locals.\\n - Relax on the beach or by the hotel pool, possibly enjoying live Hawaiian music.\\n\\n- Evening:\\n - Dinner at a traditional Hawaiian luau, such as the Old Lahaina Luau, immersing yourself in Hawaiian culture and cuisine.\\n - Return to hotel for overnight stay.\\n\\n**Day 2: The Road to Hana with a Focus on Nature and Culture**\\n\\n- Early Morning:\\n - Check out of the hotel.\\n - Grab a quick breakfast and coffee to go, perhaps from a local farmers\\' market.\\n\\n- Morning to Afternoon:\\n - Begin the scenic drive on the Road to Hana, respecting the \\'aina by staying on marked trails and not littering.\\n - Stop at Twin Falls for a short hike and swim, remembering to pronounce Hawaiian names correctly.\\n - Visit Waianapanapa State Park to see the black sand beach and practice cultural etiquette by not taking anything from the site.\\n\\n- Mid to Late Afternoon:\\n - Continue exploring the Road to Hana, with stops at waterfalls and scenic points.\\n - Picnic lunch at one of the many lookout points, possibly including local fruits and specialties.\\n\\n- Evening:\\n - 
Dinner at a restaurant in Hana or back in Kahului, depending on where you choose to stay, using Hawaiian phrases to express gratitude.\\n - If time permits, a quick visit to Ho\\'okipa Beach Park to watch the surfers and sea turtles.\\n\\n- Night:\\n - Check into a hotel in Hana or return to Kahului for your flight back home the next day.\\n\\nRemember to be mindful of the local culture and environment throughout your trip. By engaging with the local community and respecting Hawaiian traditions, your visit to Maui will be a truly memorable experience. Enjoy your journey to this beautiful island paradise!',\n",
     "  'timestamp': '2024-02-07T12:34:35.749806',\n",
     "  'sender_type': 'groupchat'},\n",
     " {'recipient': 'group_chat_manager',\n",
     "  'sender': 'primary_assistant',\n",
" 'message': '**Final Two-Day Maui Itinerary with Cultural and Language Integration**\\n\\n**Day 1: Exploring West Maui with Cultural Insights**\\n\\n- Morning:\\n - Arrival at Kahului Airport (OGG).\\n - Pick up rental car.\\n - Breakfast at a local café, trying a Hawaiian breakfast specialty.\\n - Drive to Lahaina, a historic whaling village.\\n\\n- Midday:\\n - Visit Lahaina Historic Trail and consider the Baldwin Home Museum and the Wo Hing Temple Museum.\\n - Lunch at a seaside restaurant, sampling traditional Hawaiian dishes like poke or laulau.\\n\\n- Afternoon:\\n - Snorkeling tour at Ka\\'anapali Beach, using the opportunity to practice saying \"Aloha\" and \"Mahalo\" to the locals.\\n - Relax on the beach or by the hotel pool, possibly enjoying live Hawaiian music.\\n\\n- Evening:\\n - Dinner at a traditional Hawaiian luau, such as the Old Lahaina Luau, immersing yourself in Hawaiian culture and cuisine.\\n - Return to hotel for overnight stay.\\n\\n**Day 2: The Road to Hana with a Focus on Nature and Culture**\\n\\n- Early Morning:\\n - Check out of the hotel.\\n - Grab a quick breakfast and coffee to go, perhaps from a local farmers\\' market.\\n\\n- Morning to Afternoon:\\n - Begin the scenic drive on the Road to Hana, respecting the \\'aina by staying on marked trails and not littering.\\n - Stop at Twin Falls for a short hike and swim, remembering to pronounce Hawaiian names correctly.\\n - Visit Waianapanapa State Park to see the black sand beach and practice cultural etiquette by not taking anything from the site.\\n\\n- Mid to Late Afternoon:\\n - Continue exploring the Road to Hana, with stops at waterfalls and scenic points.\\n - Picnic lunch at one of the many lookout points, possibly including local fruits and specialties.\\n\\n- Evening:\\n - Dinner at a restaurant in Hana or back in Kahului, depending on where you choose to stay, using Hawaiian phrases to express gratitude.\\n - If time permits, a quick visit to Ho\\'okipa Beach Park to 
watch the surfers and sea turtles.\\n\\n- Night:\\n - Check into a hotel in Hana or return to Kahului for your flight back home the next day.\\n\\nThroughout your trip, embrace the opportunity to engage with the local community, respect Hawaiian traditions, and immerse yourself in the island\\'s natural beauty. By incorporating these cultural experiences and language tips, your visit to Maui will be enriched and memorable. Have a fantastic journey to this enchanting island paradise! TERMINATE',\n",
     "  'timestamp': '2024-02-07T12:34:35.759164',\n",
     "  'sender_type': 'groupchat'}]"
    ]
   },
   "execution_count": 6,
   "metadata": {},
   "output_type": "execute_result"
  }
 ],
 "source": [
  "group_agent_work_flow.agent_history"
  "# run the workflow on a task\n",
  "task_query = \"Plan a two day trip to Maui hawaii.\"\n",
  "travel_workflow_manager.run(message=task_query)"
 ]
},
{

@ -390,7 +74,11 @@
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": []
 "source": [
  "# print the agent history\n",
  "print(len(travel_workflow_manager.agent_history), \"agent messages were involved in the conversation\")\n",
  "travel_workflow_manager.agent_history"
 ]
}
],
"metadata": {

@ -409,7 +97,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.10.13"
 "version": "3.10.12"
}
},
"nbformat": 4,

@ -0,0 +1,112 @@
{
 "user_id": "guestuser@gmail.com",
 "name": "Default Workflow",
 "type": "autonomous",
 "sample_tasks": [
  "paint a picture of a glass of ethiopian coffee, freshly brewed in a tall glass cup, on a table right in front of a lush green forest scenery",
  "Plot the stock price of NVIDIA YTD."
 ],
 "version": "0.0.1",
 "description": "Default workflow",
 "summary_method": "last",
 "agents": [
  {
   "agent": {
    "version": "0.0.1",
    "config": {
     "name": "user_proxy",
     "human_input_mode": "NEVER",
     "max_consecutive_auto_reply": 25,
     "system_message": "You are a helpful assistant",
     "is_termination_msg": null,
     "code_execution_config": "local",
     "default_auto_reply": "TERMINATE",
     "description": "User Proxy Agent Configuration",
     "llm_config": false,
     "admin_name": "Admin",
     "messages": [],
     "max_round": 100,
     "speaker_selection_method": "auto",
     "allow_repeat_speaker": true
    },
    "user_id": "guestuser@gmail.com",
    "type": "userproxy",
    "task_instruction": null,
    "skills": [],
    "models": [],
    "agents": []
   },
   "link": {
    "agent_id": 52,
    "workflow_id": 19,
    "agent_type": "sender",
    "sequence_id": 0
   }
  },
  {
   "agent": {
    "version": "0.0.1",
    "config": {
     "name": "default_assistant",
     "human_input_mode": "NEVER",
     "max_consecutive_auto_reply": 25,
"system_message": "You are a helpful AI assistant.\nSolve tasks using your coding and language skills.\nIn the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.\n 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.\n 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.\nSolve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\nWhen using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.\nIf you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.\nIf the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. 
If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.\nWhen you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\nReply \"TERMINATE\" in the end when everything is done.\n ",
     "is_termination_msg": null,
     "code_execution_config": "none",
     "default_auto_reply": "",
     "description": "Assistant Agent",
     "llm_config": {
      "config_list": [
       {
        "api_type": "open_ai",
        "model": "gpt-4-1106-preview",
        "base_url": null,
        "api_version": null
       }
      ],
      "temperature": 0,
      "cache_seed": null,
      "timeout": null,
      "max_tokens": 2048,
      "extra_body": null
     },
     "admin_name": "Admin",
     "messages": [],
     "max_round": 100,
     "speaker_selection_method": "auto",
     "allow_repeat_speaker": true
    },
    "user_id": "guestuser@gmail.com",
    "type": "assistant",
    "task_instruction": null,
    "skills": [
     {
      "user_id": "guestuser@gmail.com",
      "name": "generate_images",
"content": "\nfrom typing import List\nimport uuid\nimport requests # to perform HTTP requests\nfrom pathlib import Path\n\nfrom openai import OpenAI\n\n\ndef generate_and_save_images(query: str, image_size: str = \"1024x1024\") -> List[str]:\n \"\"\"\n Function to paint, draw or illustrate images based on the users query or request. Generates images from a given query using OpenAI's DALL-E model and saves them to disk. Use the code below anytime there is a request to create an image.\n\n :param query: A natural language description of the image to be generated.\n :param image_size: The size of the image to be generated. (default is \"1024x1024\")\n :return: A list of filenames for the saved images.\n \"\"\"\n\n client = OpenAI() # Initialize the OpenAI client\n response = client.images.generate(model=\"dall-e-3\", prompt=query, n=1, size=image_size) # Generate images\n\n # List to store the file names of saved images\n saved_files = []\n\n # Check if the response is successful\n if response.data:\n for image_data in response.data:\n # Generate a random UUID as the file name\n file_name = str(uuid.uuid4()) + \".png\" # Assuming the image is a PNG\n file_path = Path(file_name)\n\n img_url = image_data.url\n img_response = requests.get(img_url)\n if img_response.status_code == 200:\n # Write the binary content to a file\n with open(file_path, \"wb\") as img_file:\n img_file.write(img_response.content)\n print(f\"Image saved to {file_path}\")\n saved_files.append(str(file_path))\n else:\n print(f\"Failed to download the image from {img_url}\")\n else:\n print(\"No image data found in the response!\")\n\n # Return the list of saved files\n return saved_files\n\n\n# Example usage of the function:\n# generate_and_save_images(\"A cute baby sea otter\")\n",
      "description": "Generate and save images based on a user's query.",
      "secrets": {},
      "libraries": {}
     }
    ],
    "models": [
     {
      "user_id": "guestuser@gmail.com",
      "api_type": "open_ai",
      "description": "OpenAI GPT-4 model",
      "model": "gpt-4-1106-preview",
      "base_url": null,
      "api_version": null
     }
    ],
    "agents": []
   },
   "link": {
    "agent_id": 53,
    "workflow_id": 19,
    "agent_type": "receiver",
    "sequence_id": 0
   }
  }
 ]
}
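Each agent in the specification above is attached to the workflow through a `link` record; the `agent_type` (`sender`/`receiver`) plus a shared `sequence_id` is what pairs `user_proxy` with `default_assistant`. A minimal sketch of how such pairs can be resolved from the JSON (a hypothetical helper for illustration, not AutoGen Studio code):

```python
from collections import defaultdict

def pair_agents(spec):
    """Group link records by sequence_id and pair each sender with its receiver."""
    steps = defaultdict(dict)
    for entry in spec["agents"]:
        link = entry["link"]
        steps[link["sequence_id"]][link["agent_type"]] = entry["agent"]["config"]["name"]
    return [(step["sender"], step["receiver"]) for _, step in sorted(steps.items())]

# A trimmed-down version of the specification above
spec = {
    "agents": [
        {"agent": {"config": {"name": "user_proxy"}},
         "link": {"agent_type": "sender", "sequence_id": 0}},
        {"agent": {"config": {"name": "default_assistant"}},
         "link": {"agent_type": "receiver", "sequence_id": 0}},
    ]
}
print(pair_agents(spec))  # [('user_proxy', 'default_assistant')]
```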

@ -24,7 +24,7 @@ dependencies = [
    "typer",
    "uvicorn",
    "arxiv",
    "pyautogen[gemini]>=0.2.0",
    "pyautogen[gemini,anthropic,mistral]>=0.2.0",
    "python-dotenv",
    "websockets",
    "numpy < 2.0.0",

@ -0,0 +1,56 @@
import os

from autogenstudio.datamodel import Agent, Skill
from autogenstudio.utils import utils


class TestUtilSaveSkillsToFile:

    def test_save_skills_to_file(self):

        # cleanup test work_dir
        try:
            os.system("rm -rf work_dir")
        except Exception:
            pass

        # Create two Agents, each with a skill
        skill_clazz = Skill(
            name="skill_clazz",
            description="skill_clazz",
            user_id="guestuser@gmail.com",
            libraries=["lib1.0", "lib1.1"],
            content="I am the skill clazz content",
            secrets=[{"secret": "secret_1", "value": "value_1"}],
            agents=[],
        )

        skill_dict = Skill(
            name="skill_dict",
            description="skill_dict",
            user_id="guestuser@gmail.com",
            libraries=["lib2.0", "lib2.1"],
            content="I am the skill dict content",
            secrets=[{"secret": "secret_2", "value": "value_2"}],
            agents=[],
        )

        Agent(skills=[skill_clazz])
        Agent(skills=[skill_dict])

        # test from flow
        skills = [skill_dict.__dict__, skill_clazz]

        utils.save_skills_to_file(skills, work_dir="work_dir")

        with open("work_dir/skills.py", "r") as f:
            skills_content = f.read()

        # str.find returns -1 when the substring is absent (and -1 is truthy),
        # so compare explicitly for these assertions to be meaningful
        assert skills_content.find(skill_clazz.content) != -1
        assert skills_content.find(skill_dict.content) != -1

        # cleanup test work_dir
        try:
            os.system("rm -rf work_dir")
        except Exception:
            pass
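The test above only asserts that each skill's `content` ends up in `work_dir/skills.py`. Conceptually, the helper concatenates the skill sources into one importable module; the sketch below is a hypothetical re-implementation of that behavior (not the actual `autogenstudio.utils` code), using `tempfile` for portable cleanup instead of shelling out to `rm -rf`:

```python
import os
import tempfile

def save_skills_sketch(skills, work_dir):
    # Accept both Skill objects and plain dicts, mirroring the mixed list in the test.
    os.makedirs(work_dir, exist_ok=True)
    sources = []
    for skill in skills:
        data = skill if isinstance(skill, dict) else skill.__dict__
        sources.append(data["content"])
    path = os.path.join(work_dir, "skills.py")
    with open(path, "w") as f:
        f.write("\n\n".join(sources))
    return path

# Usage: write one skill into a throwaway directory and read it back
with tempfile.TemporaryDirectory() as work_dir:
    path = save_skills_sketch([{"content": "print('hello')"}], work_dir)
    with open(path) as f:
        print(f.read())  # print('hello')
```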

@ -0,0 +1,47 @@
import os

from autogenstudio.datamodel import Skill
from autogenstudio.utils import utils


class TestUtilGetSkillsPrompt:

    def test_get_skills_prompt(self):

        skill_clazz = Skill(
            name="skill_clazz",
            description="skill_clazz",
            user_id="guestuser@gmail.com",
            libraries=["lib1.0", "lib1.1"],
            content="I am the skill clazz content",
            secrets=[{"secret": "secret_1", "value": "value_1"}],
            agents=[],
        )

        skill_dict = Skill(
            name="skill_dict",
            description="skill_dict",
            user_id="guestuser@gmail.com",
            libraries=["lib2.0", "lib2.1"],
            content="I am the skill dict content",
            secrets=[{"secret": "secret_2", "value": "value_2"}],
            agents=[],
        )

        skills = [skill_dict.__dict__, skill_clazz]

        prompt = utils.get_skills_prompt(skills, work_dir="work_dir")

        # test that prompt contains contents of skills class and dict
        assert prompt.find(skill_clazz.content) > 0
        assert prompt.find(skill_dict.content) > 0

        # test that secrets are set in environ
        assert os.getenv("secret_1") == "value_1"
        assert os.getenv("secret_2") == "value_2"

        # cleanup test work_dir
        try:
            os.system("rm -rf work_dir")
        except Exception:
            pass
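The environment assertions above imply that `get_skills_prompt` also exports each skill's `secrets` pairs into the process environment, so that skill code can later read credentials via `os.getenv`. A hypothetical sketch of just that side effect (for illustration, not the library implementation):

```python
import os

def export_secrets(skills):
    # Each skill carries a list of {"secret": name, "value": value} pairs;
    # exporting them makes the credentials visible to executed skill code.
    for skill in skills:
        data = skill if isinstance(skill, dict) else skill.__dict__
        for pair in data.get("secrets", []):
            os.environ[pair["secret"]] = str(pair["value"])

export_secrets([{"secrets": [{"secret": "secret_1", "value": "value_1"}]}])
print(os.getenv("secret_1"))  # value_1
```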