Merge branch 'main' into cv_strategy

commit f06d27aaac
Author: zsk
Date: 2022-08-24 08:16:07 -04:00
Committed by: GitHub (GPG Key ID: 4AEE18F83AFDEB23; no known key found for this signature in database)

9 changed files with 7535 additions and 1948 deletions

@@ -20,7 +20,7 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v2
       - name: Cache conda
-        uses: actions/cache@v1
+        uses: actions/cache@v3
         with:
           path: ~/conda_pkgs_dir
           key: conda-${{ matrix.os }}-python-${{ matrix.python-version }}-${{ hashFiles('environment.yml') }}
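The cache key above embeds `hashFiles('environment.yml')`, so the conda cache is invalidated whenever the environment file changes. A minimal sketch of that invalidation behavior, using `hashlib.sha256` as a stand-in for GitHub's `hashFiles` expression (the function name and inputs here are illustrative, not the workflow's actual implementation):

```python
import hashlib

def cache_key(os_name: str, python_version: str, env_file_bytes: bytes) -> str:
    # Rough analogue of the workflow's key:
    # conda-${{ matrix.os }}-python-${{ matrix.python-version }}-${{ hashFiles('environment.yml') }}
    digest = hashlib.sha256(env_file_bytes).hexdigest()
    return f"conda-{os_name}-python-{python_version}-{digest}"

k1 = cache_key("ubuntu-latest", "3.9", b"dependencies:\n  - numpy\n")
k2 = cache_key("ubuntu-latest", "3.9", b"dependencies:\n  - numpy\n  - pandas\n")
assert k1 != k2  # editing environment.yml yields a new key, so the cache is rebuilt
```

Because the key changes with the file contents, stale conda packages are never restored after a dependency update; an unchanged `environment.yml` keeps hitting the same cache entry.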

@@ -16,7 +16,7 @@ jobs:
     working-directory: website
     steps:
       - uses: actions/checkout@v2
-      - uses: actions/setup-node@v2
+      - uses: actions/setup-node@v3
         with:
           node-version: 14.x
           # cache: yarn
@@ -52,7 +52,7 @@ jobs:
     working-directory: website
     steps:
       - uses: actions/checkout@v2
-      - uses: actions/setup-node@v2
+      - uses: actions/setup-node@v3
         with:
           node-version: 14.x
           # cache: yarn

@@ -94,7 +94,7 @@ You can find a detailed documentation about FLAML [here](https://microsoft.githu
 In addition, you can find:
-- Demo and tutorials of FLAML [here](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A).
+- [Talks](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A) and [tutorials](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) about FLAML.
 - Research around FLAML [here](https://microsoft.github.io/FLAML/docs/Research).

@@ -3107,7 +3107,9 @@ class AutoML(BaseEstimator):
         if mlflow is not None and mlflow.active_run():
             with mlflow.start_run(nested=True):
                 mlflow.log_metric("iter_counter", self._track_iter)
-                if "intermediate_results" in search_state.metric_for_logging:
+                if (search_state.metric_for_logging is not None) and (
+                    "intermediate_results" in search_state.metric_for_logging
+                ):
                     for each_entry in search_state.metric_for_logging[
                         "intermediate_results"
                     ]:
@@ -3117,6 +3119,7 @@ class AutoML(BaseEstimator):
                         "iter_counter", self._iter_per_learner[estimator]
                     )
                     del search_state.metric_for_logging["intermediate_results"]
-                mlflow.log_metrics(search_state.metric_for_logging)
+                if search_state.metric_for_logging:
+                    mlflow.log_metrics(search_state.metric_for_logging)
                 mlflow.log_metric("trial_time", search_state.trial_time)
                 mlflow.log_metric("wall_clock_time", self._state.time_from_start)
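The hunk above hardens the logging path against `metric_for_logging` being `None` or left empty after `"intermediate_results"` is stripped. A minimal sketch of that control flow, with a plain callable standing in for `mlflow.log_metrics` (the helper name `log_trial` is hypothetical, not FLAML's API):

```python
def log_trial(metric_for_logging, log_metrics):
    """Mimic the patched flow: tolerate a None metrics dict, strip
    'intermediate_results', and skip logging when nothing remains."""
    if (metric_for_logging is not None) and (
        "intermediate_results" in metric_for_logging
    ):
        # In the real code each intermediate entry is logged in a nested run
        # before the key is removed; here we only remove it.
        del metric_for_logging["intermediate_results"]
    if metric_for_logging:  # falsy for both None and {}
        log_metrics(metric_for_logging)

logged = []
log_trial(None, logged.append)                             # no TypeError on None
log_trial({"intermediate_results": [{}]}, logged.append)   # nothing left to log
log_trial({"val_loss": 0.1}, logged.append)
assert logged == [{"val_loss": 0.1}]
```

Without the two guards, the pre-patch code raised on `None` (`in` on `None`) and passed an empty dict to `mlflow.log_metrics`; the truthiness check covers both cases at once.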

@@ -1 +1 @@
-__version__ = "1.0.11"
+__version__ = "1.0.12"

File diff suppressed because one or more lines are too long

@@ -154,6 +154,19 @@ def test_mlflow():
         pass
     # subprocess.check_call([sys.executable, "-m", "pip", "uninstall", "mlflow"])
+    from sklearn.datasets import load_iris
+
+    with mlflow.start_run():
+        automl = AutoML()
+        automl_settings = {
+            "time_budget": 2,  # in seconds
+            "metric": "accuracy",
+            "task": "classification",
+            "log_file_name": "iris.log",
+        }
+        X_train, y_train = load_iris(return_X_y=True)
+        automl.fit(X_train=X_train, y_train=y_train, **automl_settings)
+

 if __name__ == "__main__":
     test_automl(600)

@@ -90,7 +90,7 @@ Then, you can use it just like you use the original `LGBMClassifier`. Your other
 * Understand the use cases for [Task-oriented AutoML](Use-Cases/task-oriented-automl), [Tune user-defined function](Use-Cases/Tune-User-Defined-Function) and [Zero-shot AutoML](Use-Cases/Zero-Shot-AutoML).
 * Find code examples under "Examples": from [AutoML - Classification](Examples/AutoML-Classification) to [Tune - PyTorch](Examples/Tune-PyTorch).
-* Watch [video tutorials](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A).
+* Find [talks](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A) and [tutorials](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) about FLAML.
 * Learn about [research](Research) around FLAML.
 * Refer to [SDK](reference/automl) and [FAQ](FAQ).

@@ -20,4 +20,4 @@ For technical details, please check our research publications.
 * [Fair AutoML](https://arxiv.org/abs/2111.06495). Qingyun Wu, Chi Wang. ArXiv preprint arXiv:2111.06495 (2021).
 * [Mining Robust Default Configurations for Resource-constrained AutoML](https://arxiv.org/abs/2202.09927). Moe Kayali, Chi Wang. ArXiv preprint arXiv:2202.09927 (2022).
-Many researchers and engineers have contributed to the technology development. In alphabetical order: Vijay Aski, Sebastien Bubeck, Surajit Chaudhuri, Kevin Chen, Yi Wei Chen, Nadiia Chepurko, Ofer Dekel, Alex Deng, Anshuman Dutt, Nicolo Fusi, Jianfeng Gao, Johannes Gehrke, Niklas Gustafsson, Silu Huang, Moe Kayali, Dongwoo Kim, Christian Konig, John Langford, Menghao Li, Mingqin Li, Susan Xueqing Liu, Zhe Liu, Naveen Gaur, Paul Mineiro, Vivek Narasayya, Jake Radzikowski, Marco Rossi, Amin Saied, Neil Tenenholtz, Olga Vrousgou, Chi Wang, Yue Wang, Markus Weimer, Qingyun Wu, Qiufeng Yin, Haozhe Zhang, Minjia Zhang, XiaoYun Zhang, Eric Zhu.
+Many researchers and engineers have contributed to the technology development. In alphabetical order: Vijay Aski, Sebastien Bubeck, Surajit Chaudhuri, Kevin Chen, Yi Wei Chen, Nadiia Chepurko, Ofer Dekel, Alex Deng, Anshuman Dutt, Nicolo Fusi, Jianfeng Gao, Johannes Gehrke, Niklas Gustafsson, Silu Huang, Moe Kayali, Dongwoo Kim, Christian Konig, John Langford, Menghao Li, Mingqin Li, Susan Xueqing Liu, Zhe Liu, Naveen Gaur, Paul Mineiro, Vivek Narasayya, Jake Radzikowski, Marco Rossi, Amin Saied, Neil Tenenholtz, Olga Vrousgou, Chi Wang, Yue Wang, Markus Weimer, Qingyun Wu, Qiufeng Yin, Haozhe Zhang, Minjia Zhang, XiaoYun Zhang, Eric Zhu, Rui Zhuang.