mirror of https://github.com/microsoft/autogen.git
update (#2655)
This commit is contained in:
parent b5d856dd36
commit e509549a01
@@ -4,6 +4,12 @@

AutoGen.Net provides the following packages; you can choose to install one or more of them based on your needs:

> [!NOTE]
> The `AutoGen.DotnetInteractive` package has a dependency on `Microsoft.DotNet.Interactive.VisualStudio`, which is not available on nuget.org. To restore the dependency, you need to add the following package source to your project:
> ```bash
> https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json
> ```
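
The package source above can be supplied through a `NuGet.config` file at the root of your project. A minimal sketch (the key names are arbitrary; the feed URL is the one from the note above):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- keep the default nuget.org feed -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- feed that hosts Microsoft.DotNet.Interactive.VisualStudio -->
    <add key="dotnet-tools" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json" />
  </packageSources>
</configuration>
```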

- `AutoGen`: The one-in-all package. This package depends on `AutoGen.Core`, `AutoGen.OpenAI`, `AutoGen.LMStudio`, `AutoGen.SemanticKernel` and `AutoGen.SourceGenerator`.
- `AutoGen.Core`: The core package. It provides the abstractions for message types, agents and group chat.
- `AutoGen.OpenAI`: This package provides agent integrations for OpenAI models.
@@ -4,7 +4,7 @@ The following example shows how to create a `MistralAITokenCounterMiddleware` @A

To collect the token usage for the entire chat session, one easy solution is to collect all the responses from the agent and sum up the token usage of each response. To collect all the agent responses, we can create a middleware that simply saves every response to a list and register it with the agent. Because this example uses @AutoGen.Mistral.MistralClientAgent, we can get the token usage information directly from each response object.

> [!NOTE]
- > You can find the complete example in [Example14_MistralClientAgent_TokenCount](https://github.com/microsoft/autogen/tree/dotnet/dotnet/sample/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs).
+ > You can find the complete example in [Example14_MistralClientAgent_TokenCount](https://github.com/microsoft/autogen/tree/main/dotnet/sample/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs).

- Step 1: Adding the using statements
[!code-csharp[](../../sample/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs?name=using_statements)]
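
The response-collecting middleware described above can be sketched as follows. This is a sketch, not the sample's exact code: it assumes AutoGen.Core's `IMiddleware` interface, and the token-usage property names on the response type are illustrative.

```csharp
// Sketch of a middleware that records every reply an agent produces.
public class ResponseCollectorMiddleware : IMiddleware
{
    public List<IMessage> Responses { get; } = new();

    public string? Name => nameof(ResponseCollectorMiddleware);

    public async Task<IMessage> InvokeAsync(
        MiddlewareContext context,
        IAgent agent,
        CancellationToken cancellationToken = default)
    {
        // Forward the call to the inner agent, then keep a copy of the reply.
        var reply = await agent.GenerateReplyAsync(context.Messages, context.Options, cancellationToken);
        Responses.Add(reply);
        return reply;
    }
}
```

Register it with `agent.RegisterMiddleware(collector)`; after the chat, iterate over `collector.Responses` and sum the token usage reported by each response.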
@@ -1,7 +1,7 @@

The following example shows how to connect to a third-party OpenAI API using @AutoGen.OpenAI.OpenAIChatAgent.

> [!NOTE]
- > You can find the complete code of this example in [Example16_OpenAIChatAgent_ConnectToThirdPartyBackend](https://github.com/microsoft/autogen/tree/dotnet/dotnet/sample/AutoGen.BasicSamples/Example16_OpenAIChatAgent_ConnectToThirdPartyBackend.cs).
+ > You can find the complete code of this example in [Example16_OpenAIChatAgent_ConnectToThirdPartyBackend](https://github.com/microsoft/autogen/tree/main/dotnet/sample/AutoGen.BasicSamples/Example16_OpenAIChatAgent_ConnectToThirdPartyBackend.cs).

## Overview
Many LLM applications/platforms support spinning up a chat server that is compatible with the OpenAI API, such as LM Studio, Ollama, Mistral, etc. This means that you can connect to these servers using @AutoGen.OpenAI.OpenAIChatAgent.
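
As a rough sketch, connecting to a local OpenAI-compatible server could look like the following. This is an assumption-laden illustration, not the sample's code: how the custom endpoint is supplied depends on the OpenAI SDK version in use, and the endpoint URL, model name, and placeholder key are all hypothetical.

```csharp
// Sketch: point the OpenAI client at a local OpenAI-compatible server
// (endpoint and model name below are assumptions, e.g. an Ollama setup).
var openAIClient = new OpenAIClient(
    new Uri("http://localhost:11434/v1"),   // assumed local endpoint
    new AzureKeyCredential("not-needed"));  // most local servers ignore the key

var agent = new OpenAIChatAgent(
    openAIClient: openAIClient,
    name: "assistant",
    modelName: "llama3",                    // model served by the backend
    systemMessage: "You are a helpful assistant.")
    .RegisterMessageConnector();
```

See the linked Example16 sample for the exact client wiring used by AutoGen.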
@@ -9,7 +9,7 @@ JSON mode is a new feature in OpenAI which allows you to instruct model to alway

## How to enable JSON mode in OpenAIChatAgent

> [!NOTE]
- > You can find the complete example in [Example13_OpenAIAgent_JsonMode](https://github.com/microsoft/autogen/tree/dotnet/dotnet/sample/AutoGen.BasicSamples/Example13_OpenAIAgent_JsonMode.cs).
+ > You can find the complete example in [Example13_OpenAIAgent_JsonMode](https://github.com/microsoft/autogen/tree/main/dotnet/sample/AutoGen.BasicSamples/Example13_OpenAIAgent_JsonMode.cs).

To enable JSON mode for @AutoGen.OpenAI.OpenAIChatAgent, set `responseFormat` to `ChatCompletionsResponseFormat.JsonObject` when creating the agent. Note that when enabling JSON mode, you also need to instruct the agent to output JSON format in its system message.
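
A minimal sketch of the step above (client creation is elided; the model name is an assumption; `ChatCompletionsResponseFormat.JsonObject` comes from the Azure.AI.OpenAI SDK used by AutoGen.OpenAI):

```csharp
// Sketch: create an OpenAIChatAgent with JSON mode enabled.
var agent = new OpenAIChatAgent(
    openAIClient: openAIClient,
    name: "assistant",
    modelName: "gpt-3.5-turbo",
    // JSON mode also requires the system message to ask for JSON explicitly.
    systemMessage: "You are a helpful assistant. Always reply in JSON format.",
    responseFormat: ChatCompletionsResponseFormat.JsonObject);
```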
@@ -19,6 +19,12 @@ For example, in data analysis scenario, agent can resolve tasks like "What is th

## How to run dotnet code snippet?
The built-in feature of running dotnet code snippets is provided by [dotnet-interactive](https://github.com/dotnet/interactive). To run a dotnet code snippet, you need to install the following package in your project, which provides the integration with dotnet-interactive:

> [!NOTE]
> The `AutoGen.DotnetInteractive` package has a dependency on `Microsoft.DotNet.Interactive.VisualStudio`, which is not available on nuget.org. To restore the dependency, you need to add the following package source to your project:
> ```bash
> https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json
> ```

```xml
<PackageReference Include="AutoGen.DotnetInteractive" />
```
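
Once the package is installed, running a snippet could look roughly like this. This is a sketch based on the library's samples: the `InteractiveService` member names and the directory variables are assumptions and may differ between versions.

```csharp
// Sketch: run a C# snippet through dotnet-interactive via
// AutoGen.DotnetInteractive's InteractiveService (names illustrative).
var service = new InteractiveService(installingDirectory: workDir);
await service.StartAsync(workingDirectory: workDir);

// Submit a snippet and capture its console output as the result.
var result = await service.SubmitCSharpCodeAsync("Console.WriteLine(1 + 1);", default);
```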