web-template

@ -0,0 +1,31 @@

# Website

This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.

## Prerequisites

To build and test documentation locally, begin by downloading and installing [Node.js](https://nodejs.org/en/download/), and then installing [Yarn](https://classic.yarnpkg.com/en/).
On Windows, you can install Yarn via npm, which comes bundled with Node.js:

```console
npm install --global yarn
```

## Installation

```console
pip install pydoc-markdown
cd website
yarn install
```

## Local Development

Navigate to the website folder and run:

```console
pydoc-markdown
yarn start
```

This command starts a local development server and opens a browser window. Most changes are reflected live without having to restart the server.

@ -0,0 +1,3 @@
module.exports = {
  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};

@ -0,0 +1,15 @@
# AutoGen - Automated Multi-Agent Chat

`flaml.autogen` offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation via multi-agent conversation.
Please find documentation about this feature [here](/docs/Use-Cases/Autogen#agents).

Links to notebook examples:
* [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_auto_feedback_from_code_execution.ipynb)
* [Auto Code Generation, Execution, Debugging and Human Feedback](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_human_feedback.ipynb)
* [Solve Tasks Requiring Web Info](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_web_info.ipynb)
* [Use Provided Tools as Functions](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_function_call.ipynb)
* [Automated Task Solving with Coding & Planning Agents](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_planning.ipynb)
* [Automated Task Solving with GPT-4 + Multiple Human Users](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_two_users.ipynb)
* [Automated Chess Game Playing & Chitchatting by GPT-4 Agents](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_chess.ipynb)
* [Automated Task Solving by Group Chat](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_groupchat.ipynb)
* [Automated Continual Learning from New Data](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_stream.ipynb)

@ -0,0 +1,8 @@
# AutoGen - Tune GPT Models

`flaml.autogen` offers a cost-effective hyperparameter optimization technique, [EcoOptiGen](https://arxiv.org/abs/2303.04673), for tuning Large Language Models. The study finds that tuning hyperparameters can significantly improve their utility.
Please find documentation about this feature [here](/docs/Use-Cases/Autogen#enhanced-inference).

Links to notebook examples:
* [Optimize for Code Generation](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_openai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_openai_completion.ipynb)
* [Optimize for Math](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_chatgpt_gpt4.ipynb)

@ -0,0 +1,552 @@

# AutoGen: Enabling Next-Gen GPT-X Applications

`flaml.autogen` simplifies the orchestration, automation, and optimization of complex GPT-X workflows. It maximizes the performance of GPT-X models and overcomes their weaknesses. It enables building next-gen GPT-X applications based on multi-agent conversations with minimal effort.

## Features

* A unified multi-agent conversation framework as a high-level abstraction for using foundation models. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
* A drop-in replacement of `openai.Completion` or `openai.ChatCompletion` as an enhanced inference API. It allows easy performance tuning, utilities like API unification & caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.

The package is under active development, with more features upcoming.

## Agents

[`flaml.autogen.agentchat`](/docs/reference/autogen/agentchat/agent) offers a multi-agent conversation framework, featuring capable, customizable, and conversable agents that integrate LLMs, tools, and humans via automated agent chat.

### Basic Concept

We have designed a generic `ResponsiveAgent` class for agents that are capable of conversing with each other through the exchange of messages to jointly finish a task. An agent can communicate with other agents and perform actions. Different agents can differ in what actions they perform after receiving messages. Two representative subclasses are `AssistantAgent` and `UserProxyAgent`.

- `AssistantAgent`. Designed to act as an assistant by responding to user requests. It can write Python code (in a Python coding block) for a user to execute when a message (typically a description of a task that needs to be solved) is received. Under the hood, the Python code is written by an LLM (e.g., GPT-4). It can also receive the execution results and suggest code with bug fixes. Its behavior can be altered by passing a new system message. The LLM [inference](#enhanced-inference) behavior can be configured via `llm_config`.
- `UserProxyAgent`. Serves as a proxy for the human user. Upon receiving a message, the `UserProxyAgent` will either solicit the human user's input or prepare an automatically generated reply. The chosen action depends on the settings of `human_input_mode` and `max_consecutive_auto_reply` when the `UserProxyAgent` instance is constructed, and on whether a human user input is available.
By default, the automatically generated reply is crafted based on automatic code execution. The `UserProxyAgent` triggers code execution automatically when it detects an executable code block in the received message and no human user input is provided. Code execution can be disabled by setting `code_execution_config` to False. LLM-based response is disabled by default. It can be enabled by setting `llm_config` to a dict corresponding to the [inference](#enhanced-inference) configuration.
When `llm_config` is set to a dict, `UserProxyAgent` can generate replies using an LLM when code execution is not performed.

The auto-reply capability of `ResponsiveAgent` allows for more autonomous multi-agent communication while retaining the possibility of human intervention.
One can also easily extend it by registering auto-reply functions with the `register_auto_reply()` method, as sketched below.

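As a minimal sketch of such an extension, a custom auto-reply function might be registered as follows. The trigger argument and the exact reply-function signature shown here are assumptions for illustration; consult the API reference for the authoritative interface.

```python
from flaml.autogen import AssistantAgent, UserProxyAgent

user_proxy = UserProxyAgent(name="user_proxy", human_input_mode="NEVER")

# hypothetical reply function: assumed to receive the replying agent, the
# message history, the sender, and an optional config, and to return a tuple
# (final, reply) where final=True short-circuits the remaining reply logic
def log_and_continue(recipient, messages=None, sender=None, config=None):
    print(f"{recipient.name} received {len(messages)} message(s) from {sender.name}")
    return False, None  # not final: fall through to the default auto-reply

# trigger on messages coming from any AssistantAgent (assumed trigger form)
user_proxy.register_auto_reply(AssistantAgent, log_and_continue)
```
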
### Basic Example

Example usage of the agents to solve a task with code:
```python
from flaml.autogen import AssistantAgent, UserProxyAgent

# create an AssistantAgent instance named "assistant"
assistant = AssistantAgent(name="assistant")

# create a UserProxyAgent instance named "user_proxy"
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # in this mode, the agent will never solicit human input but always auto-reply
)

# the assistant receives a message from the user_proxy, which contains the task description
user_proxy.initiate_chat(
    assistant,
    message="""What date is today? Which big tech stock has the largest year-to-date gain this year? How much is the gain?""",
)
```
In the example above, we create an AssistantAgent named "assistant" to serve as the assistant and a UserProxyAgent named "user_proxy" to serve as a proxy for the human user.
1. The assistant receives a message from the user_proxy, which contains the task description.
2. The assistant then tries to write Python code to solve the task and sends the response to the user_proxy.
3. Once the user_proxy receives a response from the assistant, it tries to reply by either soliciting human input or preparing an automatically generated reply. In this specific example, since `human_input_mode` is set to `"NEVER"`, the user_proxy will not solicit human input but send an automatically generated reply (auto-reply). More specifically, the user_proxy executes the code and uses the result as the auto-reply.
4. The assistant then generates a further response for the user_proxy. The user_proxy can then decide whether to terminate the conversation. If not, steps 3 and 4 are repeated.

Please find below a visual illustration of how UserProxyAgent and AssistantAgent collaboratively solve the above task:
![Agent Chat Example](images/agent_example.png)

### Human Input Mode

The `human_input_mode` parameter of `UserProxyAgent` controls the behavior of the agent when it receives a message. It can be set to `"NEVER"`, `"ALWAYS"`, or `"TERMINATE"`; a configuration sketch follows this list.
- Under the mode `human_input_mode="NEVER"`, the multi-turn conversation between the assistant and the user_proxy stops when the number of auto-replies reaches the upper limit specified by `max_consecutive_auto_reply`, or when the received message is a termination message according to `is_termination_msg`.
- When `human_input_mode` is set to `"ALWAYS"`, the user proxy agent solicits human input every time a message is received; the conversation stops when the human input is "exit", or when the received message is a termination message and no human input is provided.
- When `human_input_mode` is set to `"TERMINATE"`, the user proxy agent solicits human input only when a termination message is received or the number of auto-replies reaches `max_consecutive_auto_reply`.

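As a minimal sketch, here is a `UserProxyAgent` configured with the settings this section describes; the "TERMINATE"-suffix convention used in `is_termination_msg` is just an illustrative assumption.

```python
from flaml.autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",  # ask the human only on termination or at the auto-reply limit
    max_consecutive_auto_reply=5,  # upper limit on consecutive auto-replies
    # treat a message whose content ends with "TERMINATE" as a termination message (assumed convention)
    is_termination_msg=lambda msg: (msg.get("content") or "").rstrip().endswith("TERMINATE"),
)
```
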
### Function Calling

To leverage the [function calling capability of OpenAI's Chat Completions API](https://openai.com/blog/function-calling-and-other-api-updates), one can pass in a list of callable functions or class methods to `UserProxyAgent`, which correspond to the function descriptions passed to OpenAI's API.

Example usage of the agents to solve a task with the function calling feature:
```python
from flaml.autogen import AssistantAgent, UserProxyAgent

# put the descriptions of functions in config to be passed to OpenAI's API
llm_config = {
    "model": "gpt-4-0613",
    "functions": [
        {
            "name": "python",
            "description": "run cell in ipython and return the execution result.",
            "parameters": {
                "type": "object",
                "properties": {
                    "cell": {
                        "type": "string",
                        "description": "Valid Python cell to execute.",
                    }
                },
                "required": ["cell"],
            },
        },
        {
            "name": "sh",
            "description": "run a shell script and return the execution result.",
            "parameters": {
                "type": "object",
                "properties": {
                    "script": {
                        "type": "string",
                        "description": "Valid shell script to execute.",
                    }
                },
                "required": ["script"],
            },
        },
    ],
}

# create an AssistantAgent instance named "assistant"
chatbot = AssistantAgent("assistant", **llm_config)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
)

# define functions according to the function descriptions
from IPython import get_ipython

def exec_python(cell):
    ipython = get_ipython()
    result = ipython.run_cell(cell)
    log = str(result.result)
    if result.error_before_exec is not None:
        log += f"\n{result.error_before_exec}"
    if result.error_in_exec is not None:
        log += f"\n{result.error_in_exec}"
    return log

def exec_sh(script):
    return user_proxy.execute_code_blocks([("sh", script)])

# register the functions
user_proxy.register_function(
    function_map={
        "python": exec_python,
        "sh": exec_sh,
    }
)

# start the conversation
user_proxy.initiate_chat(
    chatbot,
    message="Draw two agents chatting with each other with an example dialog.",
)
```

### Notebook Examples

*Interested in trying it yourself? Please check the following notebook examples:*
* [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_auto_feedback_from_code_execution.ipynb)
* [Auto Code Generation, Execution, Debugging and Human Feedback](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_human_feedback.ipynb)
* [Solve Tasks Requiring Web Info](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_web_info.ipynb)
* [Use Provided Tools as Functions](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_function_call.ipynb)
* [Automated Task Solving with Coding & Planning Agents](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_planning.ipynb)
* [Automated Task Solving with GPT-4 + Multiple Human Users](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_two_users.ipynb)
* [Automated Chess Game Playing & Chitchatting by GPT-4 Agents](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_chess.ipynb)
* [Automated Task Solving by Group Chat](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_groupchat.ipynb)
* [Automated Continual Learning from New Data](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agentchat_stream.ipynb)

## Enhanced Inference

One can use [`flaml.autogen.Completion.create`](/docs/reference/autogen/oai/completion#create) to perform inference.
There are a number of benefits of using `autogen` to perform inference: performance tuning, API unification, caching, error handling, multi-config inference, result filtering, templating, and so on.

### Tune Inference Parameters

*Links to notebook examples:*
* [Optimize for Code Generation](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_openai_completion.ipynb)
* [Optimize for Math](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_chatgpt_gpt4.ipynb)

#### Choices to optimize

The cost of using foundation models for text generation is typically measured in terms of the number of tokens in the input and output combined. From the perspective of an application builder using foundation models, the goal is to maximize the utility of the generated text under an inference budget constraint (e.g., measured by the average dollar cost needed to solve a coding problem). This can be achieved by optimizing the hyperparameters of the inference, which can significantly affect both the utility and the cost of the generated text.

The tunable hyperparameters include (a usage sketch follows the list):
1. model - this is a required input, specifying the model ID to use.
1. prompt/messages - the input prompt/messages to the model, which provides the context for the text generation task.
1. max_tokens - the maximum number of tokens (words or word pieces) to generate in the output.
1. temperature - a value between 0 and 1 that controls the randomness of the generated text. A higher temperature will result in more random and diverse text, while a lower temperature will result in more predictable text.
1. top_p - a value between 0 and 1 that controls the sampling probability mass for each token generation. A lower top_p value will make it more likely to generate text based on the most likely tokens, while a higher value will allow the model to explore a wider range of possible tokens.
1. n - the number of responses to generate for a given prompt. Generating multiple responses can provide more diverse and potentially more useful output, but it also increases the cost of the request.
1. stop - a list of strings that, when encountered in the generated text, will cause the generation to stop. This can be used to control the length or the validity of the output.
1. presence_penalty, frequency_penalty - values that control the relative importance of the presence and frequency of certain words or phrases in the generated text.
1. best_of - the number of responses to generate server-side when selecting the "best" (the one with the highest log probability per token) response for a given prompt.

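For orientation, these are the same hyperparameters one passes to the inference call itself; a minimal sketch (the values are arbitrary):

```python
from flaml import autogen

response = autogen.Completion.create(
    model="text-davinci-003",
    prompt="Write a haiku about autumn.",
    max_tokens=64,    # cap the output length
    temperature=0.7,  # moderate randomness
    n=2,              # generate two candidate responses
    stop=["\n\n"],    # stop at the first blank line
)
```
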
The cost and utility of text generation are intertwined with the joint effect of these hyperparameters.
There are also complex interactions among subsets of the hyperparameters. For example, temperature and top_p are not recommended to be altered from their default values together, because they both control the randomness of the generated text, and changing both at the same time can result in conflicting effects. n and best_of are rarely tuned together, because if the application can process multiple outputs, filtering on the server side causes unnecessary information loss. Both n and max_tokens affect the total number of tokens generated, which in turn affects the cost of the request.
These interactions and trade-offs make it difficult to manually determine the optimal hyperparameter settings for a given text generation task.

*Do the choices matter? Check this [blog post](/blog/2023/04/21/LLM-tuning-math) to find example tuning results about gpt-3.5-turbo and gpt-4.*

With `flaml.autogen`, the tuning can be performed with the following information:
1. Validation data.
1. Evaluation function.
1. Metric to optimize.
1. Search space.
1. Budgets: inference and optimization, respectively.

#### Validation data

Collect a diverse set of instances. They can be stored in an iterable of dicts. For example, each instance dict can contain "problem" as a key and the description str of a math problem as the value, and "solution" as a key and the solution str as the value.

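As a minimal sketch, a validation set for math problem solving could look like the following (the two instances are made up for illustration):

```python
# each dict is one validation instance; its keys match the keyword arguments
# that the evaluation function below will receive
tune_data = [
    {
        "problem": "What is the smallest positive integer divisible by both 6 and 8?",
        "solution": "24",
    },
    {
        "problem": "How many positive divisors does 36 have?",
        "solution": "9",
    },
]
```
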
#### Evaluation function

The evaluation function should take a list of responses, plus other keyword arguments corresponding to the keys in each validation data instance, as input, and output a dict of metrics. For example,

```python
from typing import Dict, List

def eval_math_responses(responses: List[str], solution: str, **args) -> Dict:
    # select a response from the list of responses
    # (`voted_answer` and `is_equivalent` stand in for user-defined helpers)
    answer = voted_answer(responses)
    # check whether the answer is correct
    return {"success": is_equivalent(answer, solution)}
```

[`flaml.autogen.code_utils`](/docs/reference/autogen/code_utils) and [`flaml.autogen.math_utils`](/docs/reference/autogen/math_utils) offer some example evaluation functions for code generation and math problem solving.

#### Metric to optimize

The metric to optimize is usually an aggregated metric over all the tuning data instances. For example, users can specify "success" as the metric and "max" as the optimization mode. By default, the aggregation function takes the average. Users can provide a customized aggregation function if needed.

#### Search space

Users can specify the (optional) search range for each hyperparameter; a sketch follows this list.

1. model. Either a constant str, or multiple choices specified by `flaml.tune.choice`.
1. prompt/messages. The prompt is either a str or a list of strs, of the prompt templates. The messages are a list of dicts or a list of lists, of the message templates.
Each prompt/message template will be formatted with each data instance. For example, the prompt template can be:
"{problem} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed{{}}."
And `{problem}` will be replaced by the "problem" field of each data instance.
1. max_tokens, n, best_of. They can be constants, or specified by `flaml.tune.randint`, `flaml.tune.qrandint`, `flaml.tune.lograndint` or `flaml.tune.qlograndint`. By default, max_tokens is searched in [50, 1000); n is searched in [1, 100); and best_of is fixed to 1.
1. stop. It can be a str, a list of strs, a list of lists of strs, or None. Default is None.
1. temperature or top_p. One of them can be specified as a constant or by `flaml.tune.uniform`, `flaml.tune.loguniform`, etc.
Please don't provide both. By default, each configuration will choose either a temperature or a top_p in [0, 1] uniformly.
1. presence_penalty, frequency_penalty. They can be constants or specified by `flaml.tune.uniform`, etc. Not tuned by default.

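For illustration, here is a sketch of search-space overrides built from the primitives named above; these entries are passed to the tuning call (see [Perform tuning](#perform-tuning)) as extra keyword arguments, which is an assumption based on the reference docs — consult [`Completion.tune`](/docs/reference/autogen/oai/completion#tune) for the authoritative interface.

```python
from flaml import tune

# overrides for parts of the default search space; unpack into the tuning
# call shown below, e.g. autogen.Completion.tune(..., **search_space)
search_space = {
    "model": tune.choice(["gpt-3.5-turbo", "gpt-4"]),
    "prompt": "{problem} Solve the problem carefully. Put the final answer in \\boxed{{}}.",
    "max_tokens": tune.lograndint(100, 600),
    "temperature": tune.uniform(0, 1),
}
```
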
#### Budgets

One can specify an inference budget and an optimization budget.
The inference budget refers to the average inference cost per data instance.
The optimization budget refers to the total budget allowed in the tuning process. Both are measured in dollars and follow the prices per 1,000 tokens. For example, an inference budget of 0.05 means that, on average, at most $0.05 may be spent on inference per data instance.

#### Perform tuning

Now, you can use [`flaml.autogen.Completion.tune`](/docs/reference/autogen/oai/completion#tune) for tuning. For example,

```python
from flaml import autogen

config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    inference_budget=0.05,
    optimization_budget=3,
    num_samples=-1,
)
```

`num_samples` is the number of configurations to sample. -1 means unlimited (until the optimization budget is exhausted).
The returned `config` contains the optimized configuration, and `analysis` contains an [ExperimentAnalysis](/docs/reference/tune/analysis#experimentanalysis-objects) object for all the tried configurations and results.

The tuned config can be used to perform inference.

### API unification

`flaml.autogen.Completion.create` is compatible with both `openai.Completion.create` and `openai.ChatCompletion.create`, and with both the OpenAI API and the Azure OpenAI API. So models such as "text-davinci-003", "gpt-3.5-turbo" and "gpt-4" can share a common API.
When chat models are used and `prompt` is given as the input to `flaml.autogen.Completion.create`, the prompt will be automatically converted into `messages` to fit the chat completion API requirement. One advantage is that one can experiment with both chat and non-chat models for the same prompt in a unified API.

For local LLMs, one can spin up an endpoint using a package such as [simple_ai_server](https://github.com/lhenault/simpleAI) or [FastChat](https://github.com/lm-sys/FastChat), and then use the same API to send a request. See [here](/blog/2023/07/14/Local-LLMs) for examples of how to make inference with local LLMs.

When only working with chat-based models, `flaml.autogen.ChatCompletion` can be used. It also does the automatic conversion from prompt to messages, if a prompt is provided instead of messages.

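For instance, the same prompt-style call can target both a non-chat and a chat model (a minimal sketch):

```python
from flaml import autogen

# non-chat model: the prompt is used directly
response = autogen.Completion.create(model="text-davinci-003", prompt="What is 2 + 2?")
# chat model: the same prompt is auto-converted into messages
response = autogen.Completion.create(model="gpt-3.5-turbo", prompt="What is 2 + 2?")
print(autogen.Completion.extract_text(response)[0])
```
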
### Caching

API call results are cached locally and reused when the same request is issued. This is useful when repeating or continuing experiments for reproducibility and cost saving. It still allows controlled randomness, by setting the "seed" using [`set_cache`](/docs/reference/autogen/oai/completion#set_cache) or by specifying it in `create()`.

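A minimal sketch of controlling the cache seed (the seed value is arbitrary):

```python
from flaml import autogen

# use a distinct seed so that repeated experiments are cached separately
autogen.Completion.set_cache(seed=2023)
response = autogen.Completion.create(model="gpt-3.5-turbo", prompt="Hi")
```
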
### Error handling

#### Runtime error

It is easy to hit errors when calling OpenAI APIs, due to connection, rate limit, or timeout issues. Some of the errors are transient. `flaml.autogen.Completion.create` deals with the transient errors and retries automatically. The initial request timeout, retry timeout, and retry time interval can be configured via `request_timeout`, `retry_timeout`, and `flaml.autogen.Completion.retry_time`.

Moreover, one can pass a list of configurations of different models/endpoints to mitigate the rate limits. For example,

```python
import os

from flaml import autogen

response = autogen.Completion.create(
    config_list=[
        {
            "model": "gpt-4",
            "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
            "api_type": "azure",
            "api_base": os.environ.get("AZURE_OPENAI_API_BASE"),
            "api_version": "2023-06-01-preview",
        },
        {
            "model": "gpt-3.5-turbo",
            "api_key": os.environ.get("OPENAI_API_KEY"),
            "api_type": "open_ai",
            "api_base": "https://api.openai.com/v1",
            "api_version": None,
        },
        {
            "model": "llama-7B",
            "api_base": "http://127.0.0.1:8080",
            "api_type": "open_ai",
            "api_version": None,
        },
    ],
    prompt="Hi",
)
```

It will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, and a locally hosted llama-7B one by one, ignoring AuthenticationError, RateLimitError, and Timeout, until a valid result is returned. This can speed up the development process where the rate limit is a bottleneck. An error will be raised if the last choice fails, so make sure the last choice in the list has the best availability.

#### Logic error

Another type of error occurs when the returned response does not satisfy a requirement. For example, if the response is required to be a valid JSON string, one would like to filter out the responses that are not. This can be achieved by providing a list of configurations and a filter function. For example,

```python
import json

def valid_json_filter(context, config, response):
    for text in autogen.Completion.extract_text(response):
        try:
            json.loads(text)
            return True
        except ValueError:
            pass
    return False

response = autogen.Completion.create(
    config_list=[{"model": "text-ada-001"}, {"model": "gpt-3.5-turbo"}, {"model": "text-davinci-003"}],
    prompt="How to construct a json request to Bing API to search for 'latest AI news'? Return the JSON request.",
    filter_func=valid_json_filter,
)
```

The example above will try text-ada-001, gpt-3.5-turbo, and text-davinci-003 iteratively, until a valid JSON string is returned or the last config is used. One can also repeat the same model multiple times in the list to try one model several times and increase the robustness of the final response.

*Advanced use case: Check this [blog post](/blog/2023/05/18/GPT-adaptive-humaneval) to find out how to improve GPT-4's coding performance from 68% to 90% while reducing the inference cost.*

### Templating

If the provided prompt or message is a template, it will be automatically materialized with a given context. For example,

```python
response = autogen.Completion.create(
    context={"problem": "How many positive integers, not exceeding 100, are multiples of 2 or 3 but not 4?"},
    prompt="{problem} Solve the problem carefully.",
    allow_format_str_template=True,
    **config
)
```

A template is either a format str, like the example above, or a function which produces a str from several input fields, like the example below.

```python
from functools import partial

def content(turn, context):
    return "\n".join(
        [
            context[f"user_message_{turn}"],
            context[f"external_info_{turn}"]
        ]
    )

messages = [
    {
        "role": "system",
        "content": "You are a teaching assistant of math.",
    },
    {
        "role": "user",
        "content": partial(content, turn=0),
    },
]
context = {
    "user_message_0": "Could you explain the solution to Problem 1?",
    "external_info_0": "Problem 1: ...",
}

response = autogen.ChatCompletion.create(context, messages=messages, **config)
messages.append(
    {
        "role": "assistant",
        "content": autogen.ChatCompletion.extract_text(response)[0]
    }
)
messages.append(
    {
        "role": "user",
        "content": partial(content, turn=1),
    },
)
# context is a dict, so new fields are added with update()
context.update(
    {
        "user_message_1": "Why can't we apply Theorem 1 to Equation (2)?",
        "external_info_1": "Theorem 1: ...",
    }
)
response = autogen.ChatCompletion.create(context, messages=messages, **config)
```

### Logging (Experimental)

When debugging or diagnosing an LLM-based system, it is often convenient to log the API calls and analyze them. `flaml.autogen.Completion` and `flaml.autogen.ChatCompletion` offer an easy way to collect the API call histories. For example, to log the chat histories, simply run:
```python
flaml.autogen.ChatCompletion.start_logging()
```
The API calls made after this will be automatically logged. They can be retrieved at any time by:
```python
flaml.autogen.ChatCompletion.logged_history
```
To stop logging, use
```python
flaml.autogen.ChatCompletion.stop_logging()
```
If one would like to append the history to an existing dict, pass the dict like:
```python
flaml.autogen.ChatCompletion.start_logging(history_dict=existing_history_dict)
```
By default, the counter of API calls will be reset at `start_logging()`. If no reset is desired, set `reset_counter=False`.

There are two types of logging formats: compact logging and individual API call logging. The default format is compact.
Set `compact=False` in `start_logging()` to switch.

* Example of a history dict with compact logging.
```python
{
    """
    [
        {
            'role': 'system',
            'content': system_message,
        },
        {
            'role': 'user',
            'content': user_message_1,
        },
        {
            'role': 'assistant',
            'content': assistant_message_1,
        },
        {
            'role': 'user',
            'content': user_message_2,
        },
        {
            'role': 'assistant',
            'content': assistant_message_2,
        },
    ]""": {
        "created_at": [0, 1],
        "cost": [0.1, 0.2],
    }
}
```

* Example of a history dict with individual API call logging.
```python
{
    0: {
        "request": {
            "messages": [
                {
                    "role": "system",
                    "content": system_message,
                },
                {
                    "role": "user",
                    "content": user_message_1,
                }
            ],
            ... # other parameters in the request
        },
        "response": {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": assistant_message_1,
                    },
                },
            ],
            ... # other fields in the response
        }
    },
    1: {
        "request": {
            "messages": [
                {
                    "role": "system",
                    "content": system_message,
                },
                {
                    "role": "user",
                    "content": user_message_1,
                },
                {
                    "role": "assistant",
                    "content": assistant_message_1,
                },
                {
                    "role": "user",
                    "content": user_message_2,
                },
            ],
            ... # other parameters in the request
        },
        "response": {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": assistant_message_2,
                    },
                },
            ],
            ... # other fields in the response
        }
    },
}
```
It can be seen that the individual API call history contains redundant information about the conversation; for a long conversation, the degree of redundancy is high.
The compact history is more efficient, while the individual API call history contains more details.

### Other Utilities

- a [`cost`](/docs/reference/autogen/oai/completion#cost) function to calculate the cost of an API call (see the sketch after this list).
- a [`test`](/docs/reference/autogen/oai/completion#test) function to conveniently evaluate the configuration over test data.
- an [`extract_text_or_function_call`](/docs/reference/autogen/oai/completion#extract_text_or_function_call) function to extract the text or function call from a completion or chat response.

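For example, a minimal sketch of computing the dollar cost of a single call:

```python
from flaml import autogen

response = autogen.Completion.create(model="gpt-3.5-turbo", prompt="Hi")
print(autogen.Completion.cost(response))  # estimated dollar cost of this call
```
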

## Utilities for Applications

### Code

[`flaml.autogen.code_utils`](/docs/reference/autogen/code_utils) offers code-related utilities, such as:
- an [`improve_code`](/docs/reference/autogen/code_utils#improve_code) function to improve code for a given objective.
- a [`generate_assertions`](/docs/reference/autogen/code_utils#generate_assertions) function to generate assertion statements from a function signature and docstring.
- an [`implement`](/docs/reference/autogen/code_utils#implement) function to implement a function from a definition.
- an [`eval_function_completions`](/docs/reference/autogen/code_utils#eval_function_completions) function to evaluate the success of a function completion task, or to select a response from a list of responses using generated assertions.

### Math

[`flaml.autogen.math_utils`](/docs/reference/autogen/math_utils) offers utilities for math problems, such as:
- an [`eval_math_responses`](/docs/reference/autogen/math_utils#eval_math_responses) function to select a response using voting, and to check if the final answer is correct when the canonical solution is provided.

## For Further Reading

*Interested in the research that led to this package? Please check the following papers.*
* [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673). Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah. ArXiv preprint arXiv:2303.04673 (2023).
* [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337). Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2306.01337 (2023).
After Width: | Height: | Size: 192 KiB |
@ -0,0 +1,115 @@
/** @type {import('@docusaurus/types').DocusaurusConfig} */
const math = require('remark-math');
const katex = require('rehype-katex');

module.exports = {
  title: 'AutoGen',
  tagline: 'TODO',
  url: 'https://microsoft.github.io/',
  baseUrl: '/AutoGen/',
  onBrokenLinks: 'throw',
  onBrokenMarkdownLinks: 'warn',
  favicon: 'img/flaml_logo.ico',
  organizationName: 'Microsoft', // Usually your GitHub org/user name.
  projectName: 'AutoGen', // Usually your repo name.
  themeConfig: {
    navbar: {
      title: 'AutoGen',
      logo: {
        alt: 'AutoGen',
        src: 'img/flaml_logo_fill.svg',
      },
      items: [
        {
          type: 'doc',
          docId: 'Getting-Started',
          position: 'left',
          label: 'Docs',
        },
        {to: 'blog', label: 'Blog', position: 'left'},
        {
          type: 'doc',
          docId: 'FAQ',
          position: 'left',
          label: 'FAQ',
        },
        {
          href: 'https://github.com/microsoft/AutoGen',
          label: 'GitHub',
          position: 'right',
        },
      ],
    },
    footer: {
      style: 'dark',
      links: [
        // {
        //   title: 'Docs',
        //   items: [
        //     {
        //       label: 'Getting Started',
        //       to: 'docs/getting-started',
        //     },
        //   ],
        // },
        {
          title: 'Community',
          items: [
            // // {
            // //   label: 'Stack Overflow',
            // //   href: 'https://stackoverflow.com/questions/tagged/pymarlin',
            // // },
            {
              label: 'Discord',
              href: 'https://discord.gg/Cppx2vSPVP',
            },
          ],
        },
      ],
      copyright: `Copyright © ${new Date().getFullYear()} AutoGen Authors. Built with Docusaurus.`,
    },
  },
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        docs: {
          sidebarPath: require.resolve('./sidebars.js'),
          // Please change this to your repo.
          editUrl: 'https://github.com/microsoft/autogen/edit/main/website/',
          remarkPlugins: [math],
          rehypePlugins: [katex],
        },
        theme: {
          customCss: require.resolve('./src/css/custom.css'),
        },
      },
    ],
  ],
  stylesheets: [
    {
      href: "https://cdn.jsdelivr.net/npm/katex@0.13.11/dist/katex.min.css",
      integrity: "sha384-Um5gpz1odJg5Z4HAmzPtgZKdTBHZdw8S29IecapCSB31ligYPhHQZMIlWLYQGVoc",
      crossorigin: "anonymous",
    },
  ],

  plugins: [
    // ... Your other plugins.
    [
      require.resolve("@easyops-cn/docusaurus-search-local"),
      {
        // ... Your options.
        // `hashed` is recommended as long-term-cache of index file is possible.
        hashed: true,
        blogDir: "./blog/",
        // For Docs using Chinese, the `language` is recommended to be set to:
        // ```
        // language: ["en", "zh"],
        // ```
        // When applying `zh` in language, please install `nodejieba` in your project.
      },
    ],
  ],
};

@ -0,0 +1,56 @@
{
  "name": "website",
  "version": "0.0.0",
  "private": true,
  "resolutions": {
    "nth-check": "2.0.1",
    "trim": "0.0.3",
    "got": "11.8.5",
    "node-forge": "1.3.0",
    "minimatch": "3.0.5",
    "loader-utils": "2.0.4",
    "eta": "2.0.0",
    "@sideway/formula": "3.0.1",
    "http-cache-semantics": "4.1.1"
  },
  "scripts": {
    "docusaurus": "docusaurus",
    "start": "docusaurus start",
    "build": "docusaurus build",
    "swizzle": "docusaurus swizzle",
    "deploy": "docusaurus deploy",
    "clear": "docusaurus clear",
    "serve": "docusaurus serve",
    "write-translations": "docusaurus write-translations",
    "write-heading-ids": "docusaurus write-heading-ids"
  },
  "dependencies": {
    "@docusaurus/core": "0.0.0-4193",
    "@docusaurus/preset-classic": "0.0.0-4193",
    "@easyops-cn/docusaurus-search-local": "^0.21.1",
    "@mdx-js/react": "^1.6.21",
    "@svgr/webpack": "^5.5.0",
    "clsx": "^1.1.1",
    "file-loader": "^6.2.0",
    "hast-util-is-element": "1.1.0",
    "react": "^17.0.1",
    "react-dom": "^17.0.1",
    "rehype-katex": "4",
    "remark-math": "3",
    "trim": "^0.0.3",
    "url-loader": "^4.1.1",
    "minimatch": "3.0.5"
  },
  "browserslist": {
    "production": [
      ">0.5%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}

@ -0,0 +1,21 @@
/**
 * Creating a sidebar enables you to:
 - create an ordered group of docs
 - render a sidebar for each doc of that group
 - provide next/previous navigation

 The sidebars can be generated from the filesystem, or explicitly defined here.

 Create as many sidebars as you want.
 */

module.exports = {
  docsSidebar: [
    'Getting-Started',
    'Installation',
    {'Use Cases': [{type: 'autogenerated', dirName: 'Use-Cases'}]},
    {'Examples': [{type: 'autogenerated', dirName: 'Examples'}]},
    'Contribute',
    'Research',
  ],
};

@ -0,0 +1,72 @@
import React from 'react';
import clsx from 'clsx';
import styles from './HomepageFeatures.module.css';

const FeatureList = [
  {
    title: 'TODO',
    Svg: require('../../static/img/auto.svg').default,
    description: (
      <>
        TODO
      </>
    ),
  },
  {
    title: 'TODO',
    Svg: require('../../static/img/extend.svg').default,
    description: (
      <>
        TODO
      </>
    ),
  },
  // {
  //   title: 'Easy to Customize or Extend',
  //   Svg: require('../../static/img/extend.svg').default,
  //   description: (
  //     <>
  //       FLAML is designed easy to extend, such as adding custom learners or metrics.
  //       The customization level ranges smoothly from minimal
  //       (training data and task type as only input) to full (tuning a user-defined function).
  //     </>
  //   ),
  // },
  {
    title: 'TODO',
    Svg: require('../../static/img/fast.svg').default,
    description: (
      <>
        TODO
      </>
    ),
  },
];

function Feature({Svg, title, description}) {
  return (
    <div className={clsx('col col--4')}>
      <div className="text--center">
        <Svg className={styles.featureSvg} alt={title} />
      </div>
      <div className="text--center padding-horiz--md">
        <h3>{title}</h3>
        <p>{description}</p>
      </div>
    </div>
  );
}

export default function HomepageFeatures() {
  return (
    <section className={styles.features}>
      <div className="container">
        <div className="row">
          {FeatureList.map((props, idx) => (
            <Feature key={idx} {...props} />
          ))}
        </div>
      </div>
    </section>
  );
}

@ -0,0 +1,13 @@
/* stylelint-disable docusaurus/copyright-header */

.features {
  display: flex;
  align-items: center;
  padding: 2rem 0;
  width: 100%;
}

.featureSvg {
  height: 120px;
  width: 200px;
}

@ -0,0 +1,88 @@
:root {
  --ifm-font-size-base: 17px;
  --ifm-code-font-size: 90%;

  --ifm-color-primary: #0c4da2;
  --ifm-color-primary-dark: rgb(11, 69, 146);
  --ifm-color-primary-darker: #0a418a;
  --ifm-color-primary-darkest: #083671;
  --ifm-color-primary-light: #0d55b2;
  --ifm-color-primary-lighter: #0e59ba;
  --ifm-color-primary-lightest: #1064d3;

  --ifm-color-emphasis-300: #1064d3;
  --ifm-link-color: #1064d3;
  --ifm-menu-color-active: #1064d3;
}

.docusaurus-highlight-code-line {
  background-color: rgba(0, 0, 0, 0.1);
  display: block;
  margin: 0 calc(-1 * var(--ifm-pre-padding));
  padding: 0 var(--ifm-pre-padding);
}
html[data-theme='dark'] .docusaurus-highlight-code-line {
  background-color: rgba(0, 0, 0, 0.3);
}

.admonition-content a {
  text-decoration: underline;
  font-weight: 600;
  color: inherit;
}

a {
  font-weight: 600;
}

blockquote {
  /* samsung blue with lots of transparency */
  background-color: #0c4da224;
}
@media (prefers-color-scheme: dark) {
  :root {
    --ifm-hero-text-color: white;
  }
}
@media (prefers-color-scheme: dark) {
  .hero.hero--primary { --ifm-hero-text-color: white;}
}

@media (prefers-color-scheme: dark) {
  blockquote {
    --ifm-color-emphasis-300: var(--ifm-color-primary);
    /* border-left: 6px solid var(--ifm-color-emphasis-300); */
  }
}
@media (prefers-color-scheme: dark) {
  code {
    /* background-color: rgb(41, 45, 62); */
  }
}

/* Docusaurus still defaults to their green! */
@media (prefers-color-scheme: dark) {
  .react-toggle-thumb {
    border-color: var(--ifm-color-primary) !important;
  }
}

.header-github-link:hover {
  opacity: 0.6;
}

.header-github-link:before {
  content: '';
  width: 24px;
  height: 24px;
  display: flex;
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E")
no-repeat;
}

html[data-theme='dark'] .header-github-link:before {
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath fill='white' d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E")
no-repeat;
}

@ -0,0 +1,40 @@
import React from 'react';
import clsx from 'clsx';
import Layout from '@theme/Layout';
import Link from '@docusaurus/Link';
import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
import styles from './index.module.css';
import HomepageFeatures from '../components/HomepageFeatures';

function HomepageHeader() {
  const {siteConfig} = useDocusaurusContext();
  return (
    <header className={clsx('hero hero--primary', styles.heroBanner)}>
      <div className="container">
        <h1 className="hero__title">{siteConfig.title}</h1>
        <p className="hero__subtitle">{siteConfig.tagline}</p>
        <div className={styles.buttons}>
          <Link
            className="button button--secondary button--lg"
            to="/docs/getting-started">
            TODO ⏱️
          </Link>
        </div>
      </div>
    </header>
  );
}

export default function Home() {
  const {siteConfig} = useDocusaurusContext();
  return (
    <Layout
      title={`AutoML & Tuning`}
      description="A Fast Library for Automated Machine Learning and Tuning">
      <HomepageHeader />
      <main>
        <HomepageFeatures />
      </main>
    </Layout>
  );
}

@ -0,0 +1,25 @@
/* stylelint-disable docusaurus/copyright-header */

/**
 * CSS files with the .module.css suffix will be treated as CSS modules
 * and scoped locally.
 */

.heroBanner {
  padding: 4rem 0;
  text-align: center;
  position: relative;
  overflow: hidden;
}

@media screen and (max-width: 966px) {
  .heroBanner {
    padding: 2rem;
  }
}

.buttons {
  display: flex;
  align-items: center;
  justify-content: center;
}

@ -0,0 +1 @@
<svg width="556" height="557" viewBox="0 0 556 557" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="68" y="81" width="556" height="557"/></clipPath><clipPath id="clip1"><rect x="68" y="82" width="555" height="556"/></clipPath><clipPath id="clip2"><rect x="68" y="82" width="555" height="556"/></clipPath><clipPath id="clip3"><rect x="68" y="82" width="555" height="556"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-68 -81)"><g clip-path="url(#clip1)"><g clip-path="url(#clip2)"><g clip-path="url(#clip3)"><path d="M185 184.347 185 112.867C185 88.8439 204.475 69.3692 228.498 69.3692 252.522 69.3692 271.996 88.8439 271.996 112.867L271.996 184.168C311.523 160.063 324.025 108.479 299.92 68.9524 275.815 29.4254 224.232 16.9235 184.705 41.0284 145.178 65.1333 132.676 116.717 156.781 156.243 163.793 167.742 173.473 177.382 185 184.347Z" stroke="#000000" stroke-width="8" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="4" stroke-opacity="1" fill="#FFFFFF" fill-rule="nonzero" fill-opacity="1" transform="matrix(1 0 0 1.0018 68 82)"/><path d="M202.344 112.873 202.344 359.68 202.28 359.68 180.82 263.047C177.513 248.986 163.433 240.269 149.372 243.576 135.558 246.825 126.855 260.499 129.76 274.39L161.147 415.62C162.481 421.599 165.874 426.919 170.732 430.651L228.544 475.109 228.544 514.531 380.21 514.531 380.21 488.377C380.21 451.325 422.054 448.486 422.054 378.533L422.054 284.38C422.072 267.068 408.052 253.019 390.74 253.002 382.921 252.994 375.382 255.909 369.601 261.174 369.688 260.202 369.746 259.225 369.746 258.231 369.757 240.91 355.723 226.859 338.402 226.848 330.29 226.844 322.492 229.981 316.645 235.603 312.833 218.704 296.042 208.094 279.142 211.906 264.823 215.136 254.659 227.863 254.676 242.541L254.676 112.873C254.676 98.4287 242.966 86.7188 228.521 86.7188 214.077 86.7188 202.367 98.4287 202.367 112.873Z" stroke="#000000" stroke-width="8" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="4" stroke-opacity="1" fill="#FFFFFF" fill-rule="nonzero" fill-opacity="1" transform="matrix(1 0 0 1.0018 68 82)"/></g></g></g></g></svg>
After Width: | Height: | Size: 2.1 KiB |
@ -0,0 +1 @@
<svg width="557" height="557" viewBox="0 0 557 557" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="160" y="92" width="557" height="557"/></clipPath><clipPath id="clip1"><rect x="161" y="93" width="556" height="556"/></clipPath><clipPath id="clip2"><rect x="161" y="93" width="556" height="556"/></clipPath><clipPath id="clip3"><rect x="161" y="93" width="556" height="556"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-160 -92)"><g clip-path="url(#clip1)"><g clip-path="url(#clip2)"><g clip-path="url(#clip3)"><path d="M504.446 309.029C480.121 309.029 461.008 289.337 461.008 265.592 461.008 241.846 480.7 222.154 504.446 222.154 528.771 222.154 547.883 241.846 547.883 265.592 547.883 289.337 528.192 309.029 504.446 309.029ZM602.325 238.371C600.008 230.262 597.112 222.733 593.058 215.783L602.325 188.562 581.475 167.712 554.254 176.979C547.304 172.925 539.775 170.029 531.667 167.712L518.925 142.229 489.967 142.229 477.225 167.712C469.117 170.029 461.587 172.925 454.637 176.979L427.417 167.712 406.567 188.562 415.833 215.783C411.779 222.733 408.883 230.262 406.567 238.371L381.083 251.112 381.083 280.071 406.567 292.812C408.883 300.921 411.779 308.45 415.833 315.4L406.567 342.621 426.837 362.892 454.058 353.625C461.008 357.679 468.537 360.575 476.646 362.892L489.387 388.375 518.346 388.375 531.088 362.892C539.196 360.575 546.725 357.679 553.675 353.625L580.896 362.892 601.746 342.621 592.479 315.4C596.533 308.45 600.008 300.342 602.325 292.812L627.808 280.071 627.808 251.112 602.325 238.371Z" stroke="#000000" stroke-width="8.01441" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="4" stroke-opacity="1" fill="#FFFFFF" fill-rule="nonzero" fill-opacity="1"/><path d="M373.554 519.846C349.229 519.846 330.117 500.154 330.117 476.408 330.117 452.083 349.808 432.971 373.554 432.971 397.879 432.971 416.992 452.662 416.992 476.408 416.992 500.154 397.879 519.846 373.554 519.846L373.554 519.846ZM462.167 426.6 471.433 399.379 450.583 378.529 423.362 387.796C416.412 383.742 408.304 380.846 400.775 378.529L388.033 353.046 359.075 353.046 346.333 378.529C338.225 380.846 330.696 383.742 323.746 387.796L296.525 378.529 276.254 398.8 284.942 426.021C280.888 432.971 277.992 441.079 275.675 448.608L250.192 461.35 250.192 490.308 275.675 503.05C277.992 511.158 280.888 518.688 284.942 525.637L276.254 552.858 296.525 573.129 323.746 564.442C330.696 568.496 338.225 571.392 346.333 573.708L359.075 599.192 388.033 599.192 400.775 573.708C408.883 571.392 416.412 568.496 423.362 564.442L450.583 573.708 470.854 552.858 462.167 526.217C466.221 519.267 469.117 511.738 471.433 503.629L496.917 490.887 496.917 461.929 471.433 449.188C469.117 441.079 466.221 433.55 462.167 426.6Z" stroke="#000000" stroke-width="8.01441" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="4" stroke-opacity="1" fill="#FFFFFF" fill-rule="nonzero" fill-opacity="1"/></g></g></g></g></svg>
After Width: | Height: | Size: 2.9 KiB |
@ -0,0 +1 @@
<svg width="200" height="235" viewBox="0 0 200 235" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" overflow="hidden"><defs><clipPath id="clip0"><rect x="948" y="165" width="200" height="235"/></clipPath><clipPath id="clip1"><rect x="948" y="166" width="200" height="234"/></clipPath><clipPath id="clip2"><rect x="948" y="166" width="200" height="234"/></clipPath><clipPath id="clip3"><rect x="948" y="166" width="200" height="234"/></clipPath></defs><g clip-path="url(#clip0)" transform="translate(-948 -165)"><g clip-path="url(#clip1)"><g clip-path="url(#clip2)"><g clip-path="url(#clip3)"><path d="M70.8333 185.417 93.75 108.333 58.3333 108.333 75.875 15.0833 128.292 15.0833 106.25 83.3333 141.667 83.3333 70.8333 185.417Z" stroke="#000000" stroke-width="3" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="4" stroke-opacity="1" fill="#FFFFFF" fill-rule="nonzero" fill-opacity="1" transform="matrix(1 0 0 1.17 948 166)"/></g></g></g></g></svg>
After Width: | Height: | Size: 994 B |
@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1138.16 450.71"><defs><style>.cls-1{fill:#ff9406;}.cls-2{fill:#505d66;}</style></defs><g id="Layer_2" data-name="Layer 2"><g id="图层_1" data-name="图层 1"><path class="cls-1" d="M298,76.7,173.81.24a1.58,1.58,0,0,0-2.06,2.35L211.55,51a263.57,263.57,0,0,0-33.37,3.9,252.77,252.77,0,0,0-35,9,240.65,240.65,0,0,0-33,13.92,228.41,228.41,0,0,0-30.5,18.8,211.86,211.86,0,0,0-29,25.52,191.34,191.34,0,0,0-23,29.72,176.8,176.8,0,0,0-16.34,33.49,172.22,172.22,0,0,0-8.9,36.76L0,241a1.58,1.58,0,0,0,1.37,1.77,1.6,1.6,0,0,0,1-.22l79-47.9a1.55,1.55,0,0,0,.69-.86l1-3.16a145,145,0,0,1,26.41-47.86,170.28,170.28,0,0,1,41.5-36A196.9,196.9,0,0,1,203.12,84.1a214.83,214.83,0,0,1,59.32-7.55c2.86,0,5.77.12,8.63.27s5.76.34,8.62.6,5.75.57,8.6.93,5.71.78,8.54,1.25a1.58,1.58,0,0,0,1.91-1.16A1.59,1.59,0,0,0,298,76.7Z" transform="translate(0.01 0.01)"/><path class="cls-1" d="M347.83,177.83l-1-18.05a1.57,1.57,0,0,0-1.65-1.48,1.49,1.49,0,0,0-.79.26l-71.16,47.15a1.55,1.55,0,0,0-.67,1L271.9,210a143.76,143.76,0,0,1-22.58,52.27,174.42,174.42,0,0,1-42.61,43,205,205,0,0,1-58.31,28.88,217.42,217.42,0,0,1-68.28,9.93c-3.3-.05-6.63-.17-9.89-.36s-6.58-.47-9.83-.8-6.51-.76-9.73-1.24-6.42-1.05-9.6-1.68a1.57,1.57,0,0,0-1.3,2.76L171.15,450.34a1.57,1.57,0,0,0,2.39-1.94l-36.87-70.52a264,264,0,0,0,40.57-5.5A251.22,251.22,0,0,0,217.75,360a238,238,0,0,0,36.9-18.61,224.15,224.15,0,0,0,32.27-24.25,201.9,201.9,0,0,0,28.2-31.37,179.69,179.69,0,0,0,19.59-34.43A167,167,0,0,0,345.6,215,161.86,161.86,0,0,0,347.83,177.83Z" transform="translate(0.01 0.01)"/><path class="cls-2" d="M258.56,209.79,196.9,181.24l61.42-95.48a1.63,1.63,0,0,0-2.23-2.26L101.25,179.84a1.63,1.63,0,0,0-.52,2.25,1.56,1.56,0,0,0,.67.6l60.26,29.12-90.33,122a1.62,1.62,0,0,0,.13,2.09,1.6,1.6,0,0,0,2.08.23l185.24-123.5a1.63,1.63,0,0,0,.46-2.26,1.67,1.67,0,0,0-.68-.58Z" transform="translate(0.01 0.01)"/><path class="cls-2" d="M451.86,199a36.63,36.63,0,0,0-12.3,10.44,32.45,32.45,0,0,0-6.35,14.49l-4.09,24.49h104.4l-4,24H425.12l-8.88,53.25H380.3l3-17.79,14-84a60.11,60.11,0,0,1,11.49-26.51,67.08,67.08,0,0,1,22.41-19.21,58.25,58.25,0,0,1,27.69-7.07H546.4l-4,24H466.76A31.6,31.6,0,0,0,451.86,199Z" transform="translate(0.01 0.01)"/><path class="cls-2" d="M772.68,325.65,742.8,208.39l-46.3,78,60.5,2.68L637.67,325.65,729,171.09h39.76l39.83,154.56Z" transform="translate(0.01 0.01)"/><path class="cls-2" d="M643.32,301.39H597.81q-11.91,0-18.91-8.43t-5-20.33l16.93-101.54H554.61L537.69,272.63q-2.4,14.39,2.65,26.51a41.67,41.67,0,0,0,16.11,19.32,45.49,45.49,0,0,0,25.42,7.19H629Z" transform="translate(0.01 0.01)"/><path class="cls-2" d="M975.05,170.86h36.17l-25.8,154.79H949.25L966,225l-37.44,45.37H892.87l-23.7-46.5-17,101.77H816l25.8-154.79H878l37.1,72.57Z" transform="translate(0.01 0.01)"/><path class="cls-2" d="M1138.15,301.39h-75.68q-11.91,0-18.92-8.43t-5-20.33l16.92-101.54H1019.3l-16.93,101.54a47.91,47.91,0,0,0,2.63,26.51,41.7,41.7,0,0,0,16.1,19.32,45.57,45.57,0,0,0,25.42,7.19h77.29Z" transform="translate(0.01 0.01)"/></g></g></svg>
After Width: | Height: | Size: 3.0 KiB |
After Width: | Height: | Size: 4.2 KiB |
@ -0,0 +1,28 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 26.0.2, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 348.1 450.7" style="enable-background:new 0 0 348.1 450.7;" xml:space="preserve">
<style type="text/css">
.st0{fill:#FE9807;}
.st1{fill:#505D66;}
</style>
<g id="Layer_2_00000027592924314629136480000002286563024165190831_">
<g id="图层_1">
<path class="st0" d="M298,76.7L173.8,0.2c-0.7-0.5-1.7-0.3-2.2,0.5c-0.4,0.6-0.3,1.3,0.1,1.9L211.6,51
c-11.2,0.6-22.3,1.9-33.4,3.9c-11.9,2.2-23.6,5.2-35,9c-11.3,3.8-22.4,8.5-33,13.9c-10.6,5.5-20.8,11.7-30.5,18.8
c-10.4,7.6-20.1,16.1-29,25.5c-8.6,9.1-16.3,19.1-23,29.7c-6.6,10.6-12.1,21.8-16.3,33.5C7,197.2,4,209.6,2.4,222.1L0,241
c-0.1,0.9,0.5,1.7,1.4,1.8c0.4,0,0.7,0,1-0.2l78.9-47.9c0.3-0.2,0.6-0.5,0.7-0.9l1-3.2c5.7-17.5,14.7-33.7,26.4-47.9
c11.8-14.2,25.8-26.3,41.5-36c16.2-10,33.8-17.7,52.2-22.7c19.3-5.3,39.3-7.8,59.3-7.6c2.9,0,5.8,0.1,8.6,0.3s5.8,0.3,8.6,0.6
s5.8,0.6,8.6,0.9s5.7,0.8,8.5,1.2c0.8,0.2,1.7-0.3,1.9-1.2C298.9,77.8,298.6,77.1,298,76.7z"/>
<path class="st0" d="M347.8,177.8l-1-18.1c0-0.9-0.8-1.5-1.7-1.5c-0.3,0-0.6,0.1-0.8,0.3l-71.2,47.1c-0.3,0.2-0.6,0.6-0.7,1
l-0.7,3.3c-4,18.8-11.6,36.5-22.6,52.3c-11.6,16.7-26,31.3-42.6,43c-17.8,12.7-37.5,22.4-58.3,28.9c-22.1,6.9-45.1,10.3-68.3,9.9
c-3.3,0-6.6-0.2-9.9-0.4s-6.6-0.5-9.8-0.8s-6.5-0.8-9.7-1.2s-6.4-1-9.6-1.7c-0.8-0.2-1.7,0.4-1.9,1.2c-0.1,0.6,0.1,1.2,0.6,1.5
l131.4,107.6c0.3,0.2,0.6,0.4,1,0.4c0.9,0,1.6-0.7,1.6-1.6c0-0.3-0.1-0.5-0.2-0.7l-36.9-70.5c13.6-0.8,27.2-2.6,40.6-5.5
c13.8-3,27.4-7.1,40.5-12.4c12.8-5.1,25.2-11.3,36.9-18.6c11.5-7.1,22.3-15.2,32.3-24.2c10.5-9.4,19.9-20,28.2-31.4
c7.8-10.7,14.3-22.3,19.6-34.4c5-11.7,8.7-23.9,10.9-36.3C347.8,202.7,348.5,190.3,347.8,177.8z"/>
<path class="st1" d="M258.6,209.8l-61.7-28.5l61.4-95.5c0.5-0.8,0.3-1.8-0.5-2.2c-0.5-0.3-1.2-0.3-1.7,0l-154.8,96.3
c-0.8,0.5-1,1.5-0.5,2.2c0.2,0.3,0.4,0.5,0.7,0.6l60.3,29.1l-90.3,122c-0.5,0.6-0.4,1.5,0.1,2.1c0.5,0.6,1.4,0.7,2.1,0.2
l185.2-123.5c0.8-0.5,1-1.5,0.5-2.3C259.1,210.1,258.8,209.9,258.6,209.8L258.6,209.8z"/>
</g>
</g>
</svg>
After Width: | Height: | Size: 2.2 KiB |