Docstr update (#460)

* parallel tuning docstr

* update n_concurrent_trials docstr

* n_jobs default

* parallel tuning in tune docstr
Qingyun Wu 2022-02-15 12:41:53 -05:00 committed by GitHub
parent 393106d531
commit 05f9065ade
2 changed files with 15 additions and 9 deletions


@@ -482,7 +482,8 @@ class AutoML(BaseEstimator):
 task: A string of the task type, e.g.,
     'classification', 'regression', 'ts_forecast', 'rank',
     'seq-classification', 'seq-regression', 'summarization'.
-n_jobs: An integer of the number of threads for training.
+n_jobs: An integer of the number of threads for training | default=-1.
+    Use all available resources when n_jobs == -1.
 log_file_name: A string of the log file name | default="". To disable logging,
     set it to be an empty string "".
 estimator_list: A list of strings for estimator names, or 'auto'
@@ -562,8 +563,9 @@ class AutoML(BaseEstimator):
 seed: int or None, default=None | The random seed for hpo.
 n_concurrent_trials: [Experimental] int, default=1 | The number of
-    concurrent trials. For n_concurrent_trials > 1, installation of
-    ray is required: `pip install flaml[ray]`.
+    concurrent trials. When n_concurrent_trials > 1, flaml performs
+    [parallel tuning](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning),
+    and installation of ray is required: `pip install flaml[ray]`.
 keep_search_state: boolean, default=False | Whether to keep data needed
     for model search after fit(). By default the state is deleted for
     space saving.
@@ -1365,8 +1367,8 @@ class AutoML(BaseEstimator):
 groups: None or array-like | Group labels (with matching length to
     y_train) or groups counts (with sum equal to length of y_train)
     for training data.
-n_jobs: An integer of the number of threads for training. Use all
-    available resources when n_jobs == -1.
+n_jobs: An integer of the number of threads for training | default=-1.
+    Use all available resources when n_jobs == -1.
 train_best: A boolean of whether to train the best config in the
     time budget; if false, train the last config in the budget.
 train_full: A boolean of whether to train on the full data. If true,
@@ -1827,7 +1829,8 @@ class AutoML(BaseEstimator):
 task: A string of the task type, e.g.,
     'classification', 'regression', 'ts_forecast', 'rank',
     'seq-classification', 'seq-regression', 'summarization'
-n_jobs: An integer of the number of threads for training.
+n_jobs: An integer of the number of threads for training | default=-1.
+    Use all available resources when n_jobs == -1.
 log_file_name: A string of the log file name | default="". To disable logging,
     set it to be an empty string "".
 estimator_list: A list of strings for estimator names, or 'auto'
@@ -1920,8 +1923,9 @@ class AutoML(BaseEstimator):
 seed: int or None, default=None | The random seed for hpo.
 n_concurrent_trials: [Experimental] int, default=1 | The number of
-    concurrent trials. For n_concurrent_trials > 1, installation of
-    ray is required: `pip install flaml[ray]`.
+    concurrent trials. When n_concurrent_trials > 1, flaml performs
+    [parallel tuning](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning),
+    and installation of ray is required: `pip install flaml[ray]`.
 keep_search_state: boolean, default=False | Whether to keep data needed
     for model search after fit(). By default the state is deleted for
     space saving.


@@ -256,7 +256,9 @@ def run(
     used; or a local dir to save the tuning log.
 num_samples: An integer of the number of configs to try. Defaults to 1.
 resources_per_trial: A dictionary of the hardware resources to allocate
-    per trial, e.g., `{'cpu': 1}`. Only valid when using ray backend.
+    per trial, e.g., `{'cpu': 1}`. It is only valid when using the ray
+    backend (by setting `use_ray=True`). Use it when you need to do
+    [parallel tuning](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning).
 config_constraints: A list of config constraints to be satisfied.
     e.g., ```config_constraints = [(mem_size, '<=', 1024**3)]```