Update FAQ section in documentation (#390)

* UPDATE - FAQ section in documentation

* FIX - formatting test failure

* FIX - added disclaimer

* pre-commit

* Update website/docs/FAQ.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update website/docs/FAQ.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update website/docs/FAQ.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* UPDATE - notebook and FAQ information for config_list_from_models

---------

Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
Aaron 2023-10-27 13:52:26 +01:00 committed by GitHub
parent 37deabefcf
commit cde99e05c5
2 changed files with 29 additions and 81 deletions


@@ -281,7 +281,7 @@
"source": [
"## config_list_from_models\n",
"\n",
"This method creates configurations based on a provided list of models. It's useful when you have specific models in mind and don't want to manually specify each configuration.\n",
"This method creates configurations based on a provided list of models. It's useful when you have specific models in mind and don't want to manually specify each configuration. The [`config_list_from_models`](/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files. It's okay to only have the OpenAI API key, OR only the Azure OpenAI API key + base. For Azure the model name refers to the OpenAI Studio deployment name.\n",
"\n",
"Steps:\n",
"- Similar to method 1, store API keys and bases either in environment variables or `.txt` files."


@@ -2,88 +2,15 @@
## Set your API endpoints
There are multiple ways to construct a list of configurations for LLM inference.
There are multiple ways to construct configurations for LLM inference in the `oai` utilities:
### Option 1: Load a list of endpoints from json
- `get_config_list`: Generates configurations for API calls, primarily from provided API keys (see the sketch after this list).
- `config_list_openai_aoai`: Constructs a list of configurations using both Azure OpenAI and OpenAI endpoints, sourcing API keys from environment variables or local files.
- `config_list_from_json`: Loads configurations from a JSON structure, either from an environment variable or a local JSON file, with the flexibility of filtering configurations based on given criteria.
- `config_list_from_models`: Creates configurations based on a provided list of models, useful when targeting specific models without manually specifying each configuration.
- `config_list_from_dotenv`: Constructs a configuration list from a `.env` file, offering a consolidated way to manage multiple API configurations and keys from a single file.
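For example, a minimal sketch of the first helper, `get_config_list` (the key values below are placeholders, and the exact call shape is an assumption from this release's `openai_utils`):
```python
import autogen

# Placeholder keys for illustration only; substitute real API keys.
api_keys = ["<your OpenAI API key here>", "<another OpenAI API key here>"]

# Builds one configuration dict per key, e.g. [{"api_key": "..."}, ...].
config_list = autogen.get_config_list(api_keys)
```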
The [`config_list_from_json`](/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.
For example,
```python
import autogen
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": {
            "gpt-4",
            "gpt-3.5-turbo",
        }
    }
)
```
It first looks for the environment variable "OAI_CONFIG_LIST", which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named "OAI_CONFIG_LIST" under the specified `file_location`. It then filters the configs by models (you can filter by other keys as well).
The `OAI_CONFIG_LIST` var or file content looks like the following:
```json
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    },
    {
        "model": "gpt-4",
        "api_key": "<your Azure OpenAI API key here>",
        "api_base": "<your Azure OpenAI API base here>",
        "api_type": "azure",
        "api_version": "2023-07-01-preview"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "<your Azure OpenAI API key here>",
        "api_base": "<your Azure OpenAI API base here>",
        "api_type": "azure",
        "api_version": "2023-07-01-preview"
    }
]
```
### Option 2: Construct a list of endpoints for OpenAI or Azure OpenAI
The [`config_list_from_models`](/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the API keys and API bases are stored in the corresponding environment variables or local txt files:
- OpenAI API key: `os.environ["OPENAI_API_KEY"]` or `openai_api_key_file="key_openai.txt"`.
- Azure OpenAI API key: `os.environ["AZURE_OPENAI_API_KEY"]` or `aoai_api_key_file="key_aoai.txt"`. Multiple keys can be stored, one per line.
- Azure OpenAI API base: `os.environ["AZURE_OPENAI_API_BASE"]` or `aoai_api_base_file="base_aoai.txt"`. Multiple bases can be stored, one per line.
It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.
```python
import autogen
config_list = autogen.config_list_from_models(model_list=["gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k"])
```
> For Azure the model name refers to the OpenAI Studio deployment name.
The config list looks like the following, if only the OpenAI API key is available:
```python
config_list = [
    {
        'model': 'gpt-4',
        'api_key': '<your OpenAI API key here>',
    },  # OpenAI API endpoint for gpt-4
    {
        'model': 'gpt-3.5-turbo',
        'api_key': '<your OpenAI API key here>',
    },  # OpenAI API endpoint for gpt-3.5-turbo
    {
        'model': 'gpt-3.5-turbo-16k',
        'api_key': '<your OpenAI API key here>',
    },  # OpenAI API endpoint for gpt-3.5-turbo-16k
]
```
We suggest that you take a look at this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods to configure your model endpoints.
### Use the constructed configuration list in agents
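As a minimal sketch (assuming the `llm_config` parameter of the agent constructors in this release), a constructed `config_list` is handed to an agent like this:
```python
import autogen

# Load the list with any of the helpers above.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# The agent picks its endpoints and models from the list via llm_config.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
```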
@@ -160,3 +87,24 @@ Otherwise, reply CONTINUE, or the reason why the task is not solved yet."""
```
If you have problems with agents running `pip install` or get errors similar to `Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')`, you can choose **'python:3'** as the image, as shown in the code example above, and that should solve the problem.
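For reference, a minimal sketch of pinning the image, assuming the `code_execution_config` parameter of `UserProxyAgent` in this release:
```python
import autogen

# Passing an image name via use_docker runs generated code inside that image.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={
        "work_dir": "coding",
        "use_docker": "python:3",  # the image suggested above
    },
)
```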
### Agents keep thanking each other when using `gpt-3.5-turbo`
When using `gpt-3.5-turbo` you may often encounter agents going into a "gratitude loop", meaning that when they complete a task they will begin congratulating and thanking each other in a continuous loop. This is a limitation of `gpt-3.5-turbo`, in contrast to `gpt-4`, which has no problem remembering instructions. This can hinder experimentation when you are trying to test out your own use case with cheaper models.
A workaround is to add an additional termination notice to the prompt. This acts as a "little nudge" for the LLM to remember that it needs to terminate the conversation when its task is complete. You can do this by appending a string such as the following to your user input string:
```python
prompt = "Some user query"
termination_notice = (
    '\n\nDo not show appreciation in your responses, say only what is necessary. '
    'If "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE '
    'to indicate the conversation is finished and this is your last message.'
)
prompt += termination_notice
```
**Note**: This workaround gets the job done around 90% of the time, but there are occurrences where the LLM still forgets to terminate the conversation.
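For context, a hedged sketch of wiring the augmented prompt into a two-agent chat; the `is_termination_msg` check is an assumption that pairs naturally with the TERMINATE keyword:
```python
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})

# Stop as soon as the assistant replies with TERMINATE, per the termination notice.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "TERMINATE" in (msg.get("content") or ""),
)

user_proxy.initiate_chat(assistant, message=prompt)  # `prompt` from the snippet above
```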