mirror of https://github.com/microsoft/autogen.git
update (#2178)
Co-authored-by: AnonymousRepoSub <shaokunzhang529@outlook.com>
This commit is contained in:
parent
e6237d44a1
commit
dd61eaae43
@@ -12,10 +12,12 @@ tags: [LLM, research]
**TL;DR:**
Introducing **AgentOptimizer**, a new class for training LLM agents in the era of LLMs as a service.
**AgentOptimizer** is able to prompt LLMs to iteratively optimize the functions/skills of AutoGen agents according to the historical conversation and performance.

Check out one implementation of **AgentOptimizer** on the [MATH](https://github.com/hendrycks/math) dataset
[here](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb).

More information can be found in:

**Paper**: https://arxiv.org/abs/2402.11359

**Notebook**: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb
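The loop the post describes (roll the agent out on training tasks, record which ones it solved, then let an LLM revise the agent's function set) can be sketched as a self-contained toy. The names `solve_with_functions` and `optimizer_step` below are illustrative stubs, not the actual AutoGen API; the stub step stands in for the LLM-backed optimization:

```python
# Toy sketch of the iterative function-optimization loop described above.
# All names here are hypothetical stand-ins, not the AutoGen API.

def solve_with_functions(problem, functions):
    """Hypothetical agent call: try each registered function on the problem."""
    for fn in functions.values():
        answer = fn(problem)
        if answer is not None:
            return answer
    return None  # agent has no skill that solves this problem yet

def optimizer_step(history, functions):
    """Stub for the LLM-backed step: add/revise skills based on failures."""
    if any(not solved for _, solved in history):
        # e.g. the LLM proposes a new skill for the unsolved list-sum problems
        functions["add"] = lambda p: sum(p) if isinstance(p, list) else None
    return functions

functions = {}  # the agent's current function/skill set
train_set = [[1, 2, 3], [4, 5]]

for epoch in range(2):  # iterate: roll out, record performance, update skills
    history = []
    for problem in train_set:
        answer = solve_with_functions(problem, functions)
        history.append((problem, answer is not None))
    functions = optimizer_step(history, functions)

print(solve_with_functions([1, 2, 3], functions))  # 6 once the skill is added
```

After the first epoch every problem fails, the stub optimizer registers a summation skill, and the second epoch solves the whole training set, mirroring the conversation-history-driven updates the real class performs.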
## Introduction
In the traditional ML pipeline, we train a model by updating its weights according to the loss on the training set. In the era of LLM agents, however, how should we train an agent?
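The "traditional" step the question refers to is gradient descent on a loss. A one-variable example of the weight update the post contrasts against:

```python
# Classic training step: move the weight against the gradient of the loss.
# Here the loss is (w - 3)**2, so the optimum is w = 3.
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3)  # d/dw of (w - 3)**2
    w -= lr * grad      # weight update driven by the loss
print(round(w, 3))      # converges toward 3.0
```

An LLM agent served through an API exposes no weights to update, which is what motivates optimizing its functions/skills instead.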