* Add md for faq; Update readme
* Update TRANSPARENCY_FAQS.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update TRANSPARENCY_FAQS.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Remove trailing space
* Fix trailing space issue
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
The link to the documentation's FAQ#code-execution was broken because the 'docs' directory was missing in the original URL.
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* Fixed formatting issue in the README
* Fixed the formatting issue in the README
* Updated formatting as per review comments
* Refactor README.md to highlight use cases and features
* Updated README as per feedback
* Updated README as per feedback
---------
Co-authored-by: Al-Iqram Elahee <hridoy@Al-Iqrams-MacBook-Pro.local>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* Updated README.md, adding the required changes from the previous pull request.
New changes:
1. A section containing citations to AutoGen and EcoOptiGen
2. Another section containing a citation to MathChat
## Citation
[AutoGen](https://arxiv.org/abs/2308.08155).
and [EcoOptiGen](https://arxiv.org/abs/2303.04673).
```bibtex
@inproceedings{wu2023autogen,
title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang},
year={2023},
eprint={2308.08155},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@inproceedings{wang2023EcoOptiGen,
title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference},
author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah},
year={2023},
booktitle={AutoML'23},
}
```
[MathChat](https://arxiv.org/abs/2306.01337).
```bibtex
@inproceedings{wu2023empirical,
title={An Empirical Study on Challenging Math Problem Solving with GPT-4},
author={Yiran Wu and Feiran Jia and Shaokun Zhang and Hangyu Li and Erkang Zhu and Yue Wang and Yin Tat Lee and Richard Peng and Qingyun Wu and Chi Wang},
year={2023},
booktitle={ArXiv preprint arXiv:2306.01337},
}
```
* Separated AutoGen and EcoOptiGen and removed 'bibtex'
## Citation
[AutoGen](https://arxiv.org/abs/2308.08155).
```
@inproceedings{wu2023autogen,
title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang},
year={2023},
eprint={2308.08155},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
[EcoOptiGen](https://arxiv.org/abs/2303.04673).
```
@inproceedings{wang2023EcoOptiGen,
title={Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference},
author={Chi Wang and Susan Xueqing Liu and Ahmed H. Awadallah},
year={2023},
booktitle={AutoML'23},
}
```
* Improves clarity and fixes punctuation in README and Multi-agent documentation
* fix broken colab link to agentchat_groupchat_research.ipynb (others are fine)
* fix typos, improve readability
* fix bug for windows
* fix bug for windows
* clearer example
* link to example
* add test
* format
* comment
* fix assertion error
* fix test error and links
---------
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
* add readme
* migration heads-up
* remove move date
* Update README.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update readme and AutoGen docs
* Update Autogen#notebook-examples; add link to the AutoGen arXiv paper
* Update website/docs/Use-Cases/Autogen.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update link
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* math utils in autogen
* cleanup
* code utils
* remove check function from code response
* comment out test
* GPT-4
* increase request timeout
* name
* logging and error handling
* better doc
* doc
* codegen optimized
* GPT series
* text
* no demo example
* math
* import openai
* import openai
* azure model name
* azure model name
* openai version
* generate assertion if necessary
* condition to generate assertions
* init region key
* rename
* comments about budget
* prompt
---------
Co-authored-by: Susan Xueqing Liu <liususan091219@users.noreply.github.com>
* improve max_valid_n and doc
* Update README.md
Co-authored-by: Li Jiang <lijiang1@microsoft.com>
* newline at end of file
* doc
---------
Co-authored-by: Li Jiang <lijiang1@microsoft.com>
Co-authored-by: Susan Xueqing Liu <liususan091219@users.noreply.github.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* Refactor into automl subpackage
Moved some of the packages into an automl subpackage to tidy before the
task-based refactor. This is in response to discussions with the group
and a comment on the first task-based PR.
The only changes here are moving subpackages and modules into the new
automl subpackage, fixing imports to work with this structure, and fixing
some dependencies in setup.py.
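For downstream code, the import move looks roughly like the sketch below; the module locations (flaml.automl.data, flaml.automl.ml) are assumptions based on this refactor and may not cover every file that moved.
```python
# Sketch of the import move; exact module paths are assumptions.
# Before the refactor, internals lived at the package top level:
#   from flaml.data import load_openml_dataset
#   from flaml.ml import sklearn_metric_loss_score

# After the refactor, the same modules live under the automl subpackage:
from flaml.automl.data import load_openml_dataset
from flaml.automl.ml import sklearn_metric_loss_score

# The public entry point is unchanged:
from flaml import AutoML
```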
* Fix doc building post automl subpackage refactor
* Fix broken links in website post automl subpackage refactor
* Fix broken links in website post automl subpackage refactor
* Remove vw from test deps as this is breaking the build
* Move default back to the top-level
I'd moved this to automl as that's where it's used internally, but missed
that it is actually part of the public interface, so it makes sense for it
to live where it was.
* Re-add top level modules with deprecation warnings
flaml.data, flaml.ml and flaml.model are re-added to the top level,
being re-exported from flaml.automl for backwards compatibility. Adding
a deprecation warning so that we can have a planned removal later.
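A minimal sketch of the re-export-with-warning pattern described in this entry; the module chosen and the warning message are illustrative, not the actual file contents.
```python
# flaml/ml.py -- illustrative shim only, assuming flaml.automl.ml as the new home.
import warnings

from flaml.automl.ml import *  # noqa: F401,F403  re-export for backwards compatibility

warnings.warn(
    "Importing from flaml.ml is deprecated and will be removed in a future release; "
    "please import from flaml.automl.ml instead.",
    DeprecationWarning,
    stacklevel=2,
)
```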
* Fix model.py line-endings
* Pin pytorch-lightning to less than 1.8.0
We're seeing strange lightning-related bugs from pytorch-forecasting
since the release of lightning 1.8.0. Going to try constraining the
version to see if that fixes it.
* Fix the lightning version pin
I was optimistic in setting it to the 1.7.x range, but that isn't
compatible with Python 3.6.
* Remove lightning version pin
* Revert dependency version changes
* Minor change to retrigger the build
* Fix line endings in ml.py and model.py
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: EgorKraevTransferwise <egor.kraev@transferwise.com>
* support latest xgboost version
* Update test_classification.py
* Update
There are problems installing xgboost 1.6.1 on Python 3.6
* cleanup
* xgboost version
* remove time_budget_s in test
* remove redundancy
* drop support for Python 3.6
Co-authored-by: zsk <shaokunzhang529@gmail.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
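A minimal setup.py sketch of how dropping Python 3.6 can be expressed; the xgboost floor shown is hypothetical and only for illustration, and the real file has many more fields and pins.
```python
# setup.py (illustrative excerpt, not the real file)
from setuptools import setup, find_packages

setup(
    name="flaml",
    packages=find_packages(),
    python_requires=">=3.7",              # Python 3.6 support dropped
    install_requires=["xgboost>=1.6.1"],  # hypothetical pin, for illustration only
)
```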
* add logo
* update link to file
* Update README.md
* del png
* update website logo
* update icon
Co-authored-by: Qingyun Wu <qxw5138@psu.edu>
Co-authored-by: 张少坤 <zhangshaokun@fuzhi.ai>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* make AutoML inherit sklearn.base.BaseEstimator such that it can be wrapped in sklearn.multioutput.MultiOutputRegressor for multi-output regression.
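A short sketch of the wrapping this enables; the constructor keywords (task, time_budget) are assumed to be accepted per the constructor-settings change elsewhere in this PR.
```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
from flaml import AutoML

# toy multi-output regression data
X, y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# the sklearn wrapper fits one AutoML run per output target
model = MultiOutputRegressor(AutoML(task="regression", time_budget=60))
model.fit(X_train, y_train)
print(model.predict(X_test))
```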
* moved and simplified preprocessing code in AutoML.predict() to _preprocess()
if save_best_model_per_estimator is False and retrain_final is True, unfit the model after evaluation in HPO.
retrain if using ray.
update ITER_HP in config after a trial is finished.
change prophet logging level.
example and notebook update.
allow settings to be passed to the AutoML constructor ("Are you planning to add multi-output-regression capability to FLAML" #192, "Is multi-tasking allowed?" #277): the automl settings can now be passed to the constructor instead of requiring a derived class.
remove model_history.
checkpoint bug fix.
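For the constructor-settings change noted above, a minimal usage sketch; the specific keywords shown are an assumption based on the usual fit settings.
```python
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# settings that previously had to go to fit() (or a derived class)
# can be passed to the constructor directly
automl = AutoML(task="classification", metric="accuracy", time_budget=30)
automl.fit(X, y)
print(automl.best_estimator, automl.best_config)
```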
* model_history meaning save_best_model_per_estimator
* ITER_HP
* example update
* prophet logging level
* comment update in forecast notebook
* print format improvement
* allow settings to be passed to AutoML constructor
* checkpoint bug fix
* time limit for autohf regression test
* skip slow test on macos
* cleanup before del
* Integrate multivariate time series forecasting, which now supports continuous and categorical variables
- update data.py to transform time series data
- update search space
- update documentations to reflect changes
- update test_forecast.py
- rename 'forecast' task to 'ts_forecast' task
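A rough usage sketch of the renamed task; the column names and the fit keywords (dataframe, label, period) follow the forecasting notebook of this era and should be treated as assumptions.
```python
import numpy as np
import pandas as pd
from flaml import AutoML

# toy univariate series: a timestamp column plus the label column
rng = np.random.RandomState(0)
df = pd.DataFrame({
    "timestamp": pd.date_range("2020-01-01", periods=120, freq="D"),
    "y": np.sin(np.arange(120) / 7.0) + rng.normal(0, 0.1, 120),
})

automl = AutoML()
automl.fit(
    dataframe=df,
    label="y",
    task="ts_forecast",  # renamed from 'forecast'
    period=12,           # forecast horizon
    time_budget=15,
)

# predict the next 12 periods
future = pd.DataFrame({"timestamp": pd.date_range("2020-04-30", periods=12, freq="D")})
print(automl.predict(future))
```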
* update automl.py and test_forecast.py
* update forecast notebook
* update README.md and setup.py
* update ml.py and test_forecast.py
- make "ds" and "y" constant variables
* replace constants with constant variables
* bump version to 0.7.0
* update setup.py
- support 'forecast' and 'ts_forecast'
* update automl.py and data.py
- support 'forecast' and 'ts_forecast' tasks
* warning -> info for low cost partial config
#195, #110
* when n_estimators < 0, use trained_estimator's
* log debug info
* test random seed
* remove "objective"; avoid ZeroDivisionError
* hp config to estimator params
* check type of searcher
* default n_jobs
* try import
* Update searchalgo_auto.py
* CLASSIFICATION
* auto_augment flag
* min_sample_size
* make catboost optional
* config in result
* value can be float
* pytorch notebook example
* docker, pre-commit
* max_failure (#192); early_stop
* extend starting_points (#196)
Co-authored-by: Chi Wang (MSR) <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qw2ky@virginia.edu>
* remove catboost training dir
* close #48
* bs for hierarchical space. close #85
* retrain for hierarchical space
* clean ml (#180)
Co-authored-by: Qingyun Wu <qxw5138@psu.edu>
* support ranking task
* examples
* cv shuffle
* forecast api and implementation cleaner
* period constraints
* delete groups after fit
* remove extra comma
* exclusive bound
* log file name
* add cost to space
* dataset_format
* add load_openml_dataset test
* docstr
* revise test format
* simplify restore
* order categories
* openml server exception in test
* process space
* add warning
* log format
* reduce n_cpu
* nested space
* hierarchical search space for CFO
* non hierarchical for bs
* unflatten hierarchical config
* connection error
* random sample
* config signature
* check ray version
* preprocess numpy array
* catboost preprocess
* time budget
* seed, verbose, hpo_method
* test cfocat
* shallow copy in flatten_dict
prevent lgbm model duplication
* match estimator name
* quantize and log
* test qloguniform and qrandint
* test qlograndint
* thread.running
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyunwu@Qingyuns-MacBook-Pro-2.local>
* api doc for chacha
* update params
* link to paper
* update dataset id
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: Qingyun Wu <qiw@microsoft.com>
* pickle the AutoML object
* get best model per estimator
* test deberta
* stateless API
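A small sketch of the two additions above (pickling the AutoML object and retrieving the best model per estimator); the accessor name best_model_for_estimator is an assumption based on later releases.
```python
import pickle
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X, y, task="classification", time_budget=10)

# persist the whole AutoML object
with open("automl.pkl", "wb") as f:
    pickle.dump(automl, f, pickle.HIGHEST_PROTOCOL)

# retrieve the best trained model for a specific estimator
print(automl.best_model_for_estimator("lgbm"))
```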
* prevent divide by zero
* test roberta
* BlendSearchTuner
* sync
* version number
* update gitignore
* delta time
* reindex columns when dropping int-indexed columns
* add seed
* add seed in Args
* merge
* init upload of ChaCha
* remove redundancy
* add back catboost
* improve AutoVW API
* set min_resource_lease in VWOnlineTrial
* docstr
* rename
* docstr
* add docstr
* improve API and documentation
* fix name
* docstr
* naming
* remove max_resource in scheduler
* add TODO in flow2
* remove redundancy in searcher
* add input type
* adapt code from ray.tune
* move files
* naming
* documentation
* fix import error
* fix format issues
* remove cb in worse than test
* improve _generate_all_comb
* remove ray tune
* naming
* VowpalWabbitTrial
* import error
* import error
* merge test code
* scheduler import
* fix import
* remove
* import, minor bug and version
* Float or Categorical
* fix default
* add test_autovw.py
* add vowpalwabbit and openml
* lint
* reorg
* lint
* indent
* add autovw notebook
* update notebook
* update log msg and autovw notebook
* update autovw notebook
* update autovw notebook
* add available strings for model_select_policy
* string for metric
* Update vw format in flaml/onlineml/trial.py
Co-authored-by: olgavrou <olgavrou@gmail.com>
* make init_config optional
* add _setup_trial_runner and update notebook
* space
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qiw@microsoft.com>
Co-authored-by: olgavrou <olgavrou@gmail.com>
* pickle the AutoML object
* get best model per estimator
* test deberta
* stateless API
* Add Gitter badge (#41)
* prevent divide by zero
* test roberta
* BlendSearchTuner
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: The Gitter Badger <badger@gitter.im>
* v0.2.2
separate the HPO part into the module flaml.tune
enhanced implementation of FLOW^2, CFO and BlendSearch
support parallel tuning using ray tune
add support for sample_weight and generic fit arguments
enable mlflow logging
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: qingyun-wu <qw2ky@virginia.edu>
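To illustrate the flaml.tune split described in the v0.2.2 notes above, a minimal tuning sketch; the sampling helpers and run() keywords mirror the later flaml.tune API and may differ slightly from v0.2.2.
```python
from flaml import tune

def evaluate_config(config):
    # toy objective: a quadratic bowl with a known minimum at (3, 5)
    score = (config["x"] - 3) ** 2 + (config["y"] - 5) ** 2
    return {"score": score}

analysis = tune.run(
    evaluate_config,
    config={
        "x": tune.uniform(-10, 10),
        "y": tune.uniform(-10, 10),
    },
    low_cost_partial_config={"x": 0, "y": 0},  # cheap starting point for CFO/BlendSearch
    metric="score",
    mode="min",
    num_samples=100,
)
print(analysis.best_config)
```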