mirror of https://github.com/microsoft/autogen.git

Fix documentation (#1075)

* Fix indentation in documentation
* newline
* version

This commit is contained in:
parent 5387a0a607
commit a30d198530
@@ -14,8 +14,10 @@
 <br>
 </p>
 
-:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web)
+:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
 
 :fire: [autogen](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
 
+:fire: FLAML supports AutoML and Hyperparameter Tuning features in [Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) private preview. Sign up for these features at: https://aka.ms/fabric/data-science/sign-up.
+
@@ -44,7 +44,7 @@
 },
 "outputs": [],
 "source": [
-"# %pip install flaml[autogen]"
+"# %pip install flaml[autogen]==2.0.0rc1"
 ]
 },
 {
@@ -44,7 +44,7 @@
 },
 "outputs": [],
 "source": [
-"# %pip install flaml[autogen]"
+"# %pip install flaml[autogen]==2.0.0rc1"
 ]
 },
 {
@@ -44,7 +44,7 @@
 },
 "outputs": [],
 "source": [
-"# %pip install flaml[autogen]"
+"# %pip install flaml[autogen]==2.0.0rc1"
 ]
 },
 {
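These three hunks pin the notebooks' install cell to the `2.0.0rc1` pre-release so they keep working while the API is in flux. Below is a minimal sketch of a sanity check to run after such a cell, assuming only that the distribution is named `flaml` as in the diff; the assertion itself is an illustration, not part of the notebooks:

```python
# Uncomment inside a notebook cell to install the pinned pre-release:
# %pip install flaml[autogen]==2.0.0rc1

# Confirm the environment has the pinned version before running the rest
# of the notebook; importlib.metadata is in the standard library (3.8+).
from importlib.metadata import version

assert version("flaml") == "2.0.0rc1", version("flaml")
```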
@@ -23,30 +23,30 @@ There are several ways of using flaml:

 #### (New) [Auto Generation](/docs/Use-Cases/Auto-Generation)

 Maximize the utility of expensive LLMs such as ChatGPT and GPT-4, including:
-- A drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalities like tuning, caching, templating, and filtering. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
-```python
-from flaml import oai
-
-# perform tuning
-config, analysis = oai.Completion.tune(
-    data=tune_data,
-    metric="success",
-    mode="max",
-    eval_func=eval_func,
-    inference_budget=0.05,
-    optimization_budget=3,
-    num_samples=-1,
-)
-
-# perform inference for a test instance
-response = oai.Completion.create(context=test_instance, **config)
-```
-- LLM-driven intelligent agents which can perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,
-```python
-assistant = AssistantAgent("assistant")
-user = UserProxyAgent("user", human_input_mode="TERMINATE")
-assistant.receive("Draw a rocket and save to a file named 'rocket.svg'")
-```
+- A drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalities like tuning, caching, templating, and filtering. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
+  ```python
+  from flaml import oai
+
+  # perform tuning
+  config, analysis = oai.Completion.tune(
+      data=tune_data,
+      metric="success",
+      mode="max",
+      eval_func=eval_func,
+      inference_budget=0.05,
+      optimization_budget=3,
+      num_samples=-1,
+  )
+
+  # perform inference for a test instance
+  response = oai.Completion.create(context=test_instance, **config)
+  ```
+- LLM-driven intelligent agents which can perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,
+  ```python
+  assistant = AssistantAgent("assistant")
+  user = UserProxyAgent("user", human_input_mode="TERMINATE")
+  assistant.receive("Draw a rocket and save to a file named 'rocket.svg'")
+  ```

 #### [Task-oriented AutoML](/docs/Use-Cases/task-oriented-automl)
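The tuning snippet above references `tune_data` and `eval_func` without defining them. A minimal sketch of what those inputs could look like, assuming `eval_func` follows the documented contract of receiving the list of generated responses plus the fields of one data instance and returning a dict of metrics; the field names `problem` and `solution` are illustrative, not part of the API:

```python
# Illustrative tuning data: each instance carries a prompt plus the ground
# truth needed to score a generation (the field names are assumptions).
tune_data = [
    {"problem": "What is 2 + 2?", "solution": "4"},
    {"problem": "What is 12 * 3?", "solution": "36"},
]

# Produces the "success" metric that oai.Completion.tune maximizes above:
# 1 if any sampled response contains the expected answer, else 0.
def eval_func(responses, **instance):
    expected = instance["solution"]
    return {"success": int(any(expected in r for r in responses))}
```

The agent example likewise assumes `AssistantAgent` and `UserProxyAgent` have already been imported; their exact module path depends on the release.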