Asynchronous $\epsilon$-Greedy Bayesian Optimisation

George De Ath, Richard M. Everson, Jonathan E. Fieldsend
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:578-588, 2021.

Abstract

Batch Bayesian optimisation (BO) is a successful technique for the optimisation of expensive black-box functions. Asynchronous BO can reduce wallclock time by starting a new evaluation as soon as another finishes, thus maximising resource utilisation. To maximise resource allocation, we develop a novel asynchronous BO method, AEGiS (Asynchronous $\epsilon$-Greedy Global Search), that combines greedy search, exploiting the surrogate's mean prediction, with Thompson sampling and random selection from the approximate Pareto set describing the trade-off between exploitation (surrogate mean prediction) and exploration (surrogate posterior variance). We demonstrate empirically the efficacy of AEGiS on synthetic benchmark problems, meta-surrogate hyperparameter tuning problems and real-world problems, showing that AEGiS generally outperforms existing methods for asynchronous BO. When a single worker is available, performance is no worse than BO using expected improvement.
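The selection rule sketched below is an illustrative reading of the abstract only, not the authors' implementation. It assumes a fitted surrogate exposing a `predict(X)` method that returns posterior means and standard deviations over a finite candidate set, and the even split between the two exploratory moves is a placeholder rather than a value taken from the paper.

```python
import numpy as np

def aegis_select(surrogate, candidates, epsilon=0.1, rng=None):
    """Illustrative AEGiS-style acquisition step (not the authors' code).

    With probability 1 - epsilon, exploit by returning the candidate with the
    best (lowest) surrogate mean. Otherwise explore, either by Thompson
    sampling from the surrogate posterior or by picking a random member of
    the approximate Pareto set trading off the surrogate mean (exploitation)
    against the posterior standard deviation (exploration).
    """
    rng = rng or np.random.default_rng()
    mu, sigma = surrogate.predict(candidates)  # posterior mean and std. dev.

    if rng.random() > epsilon:
        # Exploit: minimise the surrogate's mean prediction.
        return candidates[np.argmin(mu)]

    if rng.random() < 0.5:
        # Explore (i): Thompson sampling. Independent per-candidate draws are
        # used here as a cheap stand-in for a joint posterior sample.
        return candidates[np.argmin(rng.normal(mu, sigma))]

    # Explore (ii): uniform-random choice from the approximate Pareto set of
    # candidates not strictly dominated on (low mean, high uncertainty).
    dominated = np.array(
        [np.any((mu < mu[i]) & (sigma > sigma[i])) for i in range(len(mu))]
    )
    return candidates[rng.choice(np.flatnonzero(~dominated))]
```

The point of the sketch is only the $\epsilon$-greedy structure; the paper handles the choice of exploratory move and the construction of the Pareto set in more detail.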

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-de-ath21a,
  title     = {Asynchronous $\epsilon$-Greedy Bayesian Optimisation},
  author    = {De Ath, George and Everson, Richard M. and Fieldsend, Jonathan E.},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {578--588},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/de-ath21a/de-ath21a.pdf},
  url       = {https://proceedings.mlr.press/v161/de-ath21a.html},
  abstract  = {Batch Bayesian optimisation (BO) is a successful technique for the optimisation of expensive black-box functions. Asynchronous BO can reduce wallclock time by starting a new evaluation as soon as another finishes, thus maximising resource utilisation. To maximise resource allocation, we develop a novel asynchronous BO method, AEGiS (Asynchronous $\epsilon$-Greedy Global Search) that combines greedy search, exploiting the surrogate's mean prediction, with Thompson sampling and random selection from the approximate Pareto set describing the trade-off between exploitation (surrogate mean prediction) and exploration (surrogate posterior variance). We demonstrate empirically the efficacy of AEGiS on synthetic benchmark problems, meta-surrogate hyperparameter tuning problems and real-world problems, showing that AEGiS generally outperforms existing methods for asynchronous BO. When a single worker is available performance is no worse than BO using expected improvement.}
}
Endnote
%0 Conference Paper
%T Asynchronous $\epsilon$-Greedy Bayesian Optimisation
%A George De Ath
%A Richard M. Everson
%A Jonathan E. Fieldsend
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-de-ath21a
%I PMLR
%P 578--588
%U https://proceedings.mlr.press/v161/de-ath21a.html
%V 161
%X Batch Bayesian optimisation (BO) is a successful technique for the optimisation of expensive black-box functions. Asynchronous BO can reduce wallclock time by starting a new evaluation as soon as another finishes, thus maximising resource utilisation. To maximise resource allocation, we develop a novel asynchronous BO method, AEGiS (Asynchronous $\epsilon$-Greedy Global Search) that combines greedy search, exploiting the surrogate's mean prediction, with Thompson sampling and random selection from the approximate Pareto set describing the trade-off between exploitation (surrogate mean prediction) and exploration (surrogate posterior variance). We demonstrate empirically the efficacy of AEGiS on synthetic benchmark problems, meta-surrogate hyperparameter tuning problems and real-world problems, showing that AEGiS generally outperforms existing methods for asynchronous BO. When a single worker is available performance is no worse than BO using expected improvement.
APA
De Ath, G., Everson, R. M. & Fieldsend, J. E. (2021). Asynchronous $\epsilon$-Greedy Bayesian Optimisation. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:578-588. Available from https://proceedings.mlr.press/v161/de-ath21a.html.
