mirror of https://github.com/microsoft/autogen.git
docs: initial Jupyter support for website docs, move config notebook (#1448)
* docs: jupyter support for website docs, move config docs
* update devcontainer dockerfile, fix CI issues
* update based on working directory
* update TLDR
* install into temp
* remove unrelated change from diff
* fix spelling issue
* Update sidebars.js
This commit is contained in:
parent 8d1d07308a
commit 276d5b7d31
@@ -4,7 +4,7 @@ FROM python:3.11-slim-bookworm
 # Update and install necessary packages
 RUN apt-get update && apt-get -y update
 # added vim and nano for convenience
-RUN apt-get install -y sudo git npm vim nano curl
+RUN apt-get install -y sudo git npm vim nano curl wget
 
 # Setup a non-root user 'autogen' with sudo access
 RUN adduser --disabled-password --gecos '' autogen
@@ -32,6 +32,14 @@ RUN sudo pip install pydoc-markdown
 RUN cd website
 RUN yarn install --frozen-lockfile --ignore-engines
 
+RUN arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) && \
+    wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.549/quarto-1.4.549-linux-${arch}.tar.gz && \
+    mkdir -p /home/autogen/quarto/ && \
+    tar -xzf quarto-1.4.549-linux-${arch}.tar.gz --directory /home/autogen/quarto/ && \
+    rm quarto-1.4.549-linux-${arch}.tar.gz
+
+ENV PATH="${PATH}:/home/autogen/quarto/quarto-1.4.549/bin/"
+
 # Exposes the Yarn port for Docusaurus
 EXPOSE 3000
 
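The `sed` pipeline in the Dockerfile hunk above maps `uname`-style machine names to the `arm64`/`amd64` names used in Quarto's release artifact filenames. A minimal Python sketch of the same mapping (`map_arch` is an illustrative helper, not part of the repo):

```python
# Illustrative re-implementation of the Dockerfile line:
#   arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/)
# map_arch is a hypothetical helper used only for this sketch.
def map_arch(machine: str) -> str:
    return machine.replace("aarch64", "arm64").replace("x86_64", "amd64")

# The mapped name selects the release artifact to download, e.g.
# quarto-1.4.549-linux-amd64.tar.gz on x86_64 hosts.
print(map_arch("x86_64"))   # amd64
print(map_arch("aarch64"))  # arm64
```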
@@ -41,6 +41,15 @@ jobs:
       - name: pydoc-markdown run
         run: |
           pydoc-markdown
+      - name: quarto install
+        working-directory: ${{ runner.temp }}
+        run: |
+          wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.549/quarto-1.4.549-linux-amd64.tar.gz
+          tar -xzf quarto-1.4.549-linux-amd64.tar.gz
+          echo "$(pwd)/quarto-1.4.549/bin/" >> $GITHUB_PATH
+      - name: quarto run
+        run: |
+          quarto render .
       - name: Test Build
         run: |
           if [ -e yarn.lock ]; then
@@ -75,6 +84,15 @@ jobs:
       - name: pydoc-markdown run
         run: |
           pydoc-markdown
+      - name: quarto install
+        working-directory: ${{ runner.temp }}
+        run: |
+          wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.4.549/quarto-1.4.549-linux-amd64.tar.gz
+          tar -xzf quarto-1.4.549-linux-amd64.tar.gz
+          echo "$(pwd)/quarto-1.4.549/bin/" >> $GITHUB_PATH
+      - name: quarto run
+        run: |
+          quarto render .
       - name: Build website
         run: |
           if [ -e yarn.lock ]; then
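The `echo ... >> $GITHUB_PATH` line in the workflow hunks above works because the Actions runner reads the file named by `GITHUB_PATH` after each step and prepends its entries to `PATH` for subsequent steps, which is how `quarto render` becomes runnable in the next step. A rough Python sketch of that mechanism, assuming later entries take precedence (`apply_github_path` is a hypothetical name for illustration):

```python
import os

# Sketch of how GITHUB_PATH entries might be folded into PATH for later
# steps; apply_github_path is an illustrative stand-in, not runner code.
def apply_github_path(entries, current_path):
    for entry in entries:
        # Each appended line is prepended to PATH, so later entries win.
        current_path = entry + os.pathsep + current_path
    return current_path

new_path = apply_github_path(["/tmp/quarto-1.4.549/bin/"], "/usr/bin")
print(new_path)
```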
@@ -10,6 +10,8 @@ package-lock.json
 .cache-loader
 docs/reference
 
+docs/llm_endpoint_configuration.mdx
+
 # Misc
 .DS_Store
 .env.local
@@ -20,3 +22,5 @@ docs/reference
 npm-debug.log*
 yarn-debug.log*
 yarn-error.log*
+
+/.quarto/
@@ -19,12 +19,19 @@ cd website
 yarn install
 ```
 
+### Install Quarto
+
+`quarto` is used to render notebooks.
+
+Install it [here](https://quarto.org/docs/get-started/).
+
 ## Local Development
 
-Navigate to the website folder and run:
+Navigate to the `website` folder and run:
 
 ```console
+pydoc-markdown
+quarto render ./docs
 yarn start
 ```
 
@@ -4,20 +4,57 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "# In-depth Guide to OpenAI Utility Functions\n",
+    "---\n",
+    "custom_edit_url: https://github.com/microsoft/autogen/edit/main/website/docs/llm_endpoint_configuration.ipynb\n",
+    "---\n",
+    "\n",
+    "# LLM Endpoint Configuration\n",
+    "\n",
+    "## TL;DR\n",
+    "\n",
+    "For just getting started with AutoGen you can use the following to define your LLM endpoint configuration:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import autogen\n",
+    "\n",
+    "config_list = autogen.get_config_list([\"YOUR_OPENAI_API_KEY\"], api_type=\"openai\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\\:\\:\\:danger \n",
+    "\n",
+    "Never commit secrets into your code. Before committing, change the code to use a different way of providing your API keys as described below.\n",
+    "\n",
+    "\\:\\:\\:"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "\n",
+    "## In-depth\n",
+    "\n",
     "Managing API configurations can be tricky, especially when dealing with multiple models and API versions. The provided utility functions assist users in managing these configurations effectively. Ensure your API keys and other sensitive data are stored securely. You might store keys in `.txt` or `.env` files or environment variables for local development. Never expose your API keys publicly. If you insist on storing your key files locally on your repo (you shouldn't), ensure the key file path is added to the `.gitignore` file.\n",
     "\n",
-    "#### Steps:\n",
+    "### Steps:\n",
     "1. Obtain API keys from OpenAI and optionally from Azure OpenAI (or other provider).\n",
     "2. Store them securely using either:\n",
     "   - Environment Variables: `export OPENAI_API_KEY='your-key'` in your shell.\n",
     "   - Text File: Save the key in a `key_openai.txt` file.\n",
     "   - Env File: Save the key to a `.env` file eg: `OPENAI_API_KEY=sk-********************`\n",
     "3. [Ensure `pyautogen` is installed](./installation/Installation.mdx)\n",
     "\n",
-    "---\n",
-    "\n",
-    "**TL;DR:** <br>\n",
     "There are many ways to generate a `config_list` depending on your use case:\n",
     "\n",
     "- `get_config_list`: Generates configurations for API calls, primarily from provided API keys.\n",
@@ -26,14 +63,14 @@
     "- `config_list_from_models`: Creates configurations based on a provided list of models, useful when targeting specific models without manually specifying each configuration.\n",
     "- `config_list_from_dotenv`: Constructs a configuration list from a `.env` file, offering a consolidated way to manage multiple API configurations and keys from a single file.\n",
     "\n",
-    "If mutiple models are provided, the Autogen client (`OpenAIWrapper`) and agents don't choose the \"best model\" on any criteria - inference is done through the very first model and the next one is used only if the current model fails (e.g. API throttling by the provider or a filter condition is unsatisfied)."
+    "If multiple models are provided, the Autogen client (`OpenAIWrapper`) and agents don't choose the \"best model\" on any criteria - inference is done through the very first model and the next one is used only if the current model fails (e.g. API throttling by the provider or a filter condition is unsatisfied)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "#### What is a `config_list`?\n",
+    "### What is a `config_list`?\n",
    "When instantiating an assistant, such as the example below, it is passed a `config_list`. This is used to tell the `AssistantAgent` which models or configurations it has access to:\n",
    "```python\n",
    "\n",
@@ -56,24 +93,6 @@
    "Different tasks may require different models, and the `config_list` aids in dynamically selecting the appropriate model configuration, managing API keys, endpoints, and versions for efficient operation of the intelligent assistant. In summary, the `config_list` helps the agents work efficiently, reliably, and optimally by managing various configurations and interactions with the OpenAI API - enhancing the adaptability and functionality of the agents."
   ]
  },
- {
-  "cell_type": "code",
-  "execution_count": 2,
-  "metadata": {},
-  "outputs": [],
-  "source": [
-   "# %pip install \"pyautogen>=0.2.3\""
-  ]
- },
- {
-  "cell_type": "code",
-  "execution_count": 3,
-  "metadata": {},
-  "outputs": [],
-  "source": [
-   "import autogen"
-  ]
- },
 {
  "cell_type": "markdown",
  "metadata": {},
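The `get_config_list` call in the notebook's TL;DR cell returns a list of configuration dicts, one per API key. A hedged sketch of that shape (this stand-in only mimics the call for illustration and is not autogen's actual implementation):

```python
# Stand-in sketch of what autogen.get_config_list produces: one config dict
# per key, each tagged with the api_type. Not the real autogen code.
def get_config_list(api_keys, api_type="openai"):
    return [{"api_key": key, "api_type": api_type} for key in api_keys]

config_list = get_config_list(["YOUR_OPENAI_API_KEY"])
print(config_list)  # [{'api_key': 'YOUR_OPENAI_API_KEY', 'api_type': 'openai'}]
```

Each dict in the list is one endpoint candidate; as noted above, the client tries them in order, falling back to the next only when a call fails.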
@@ -22,6 +22,7 @@
       id: "installation/Installation"
     },
   },
+  'llm_endpoint_configuration',
   {'Use Cases': [{type: 'autogenerated', dirName: 'Use-Cases'}]},
   'Contribute',
   'Research',