Bayesian Optimization under Stochastic Delayed Feedback

Arun Verma, Zhongxiang Dai, Bryan Kian Hsiang Low
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:22145-22167, 2022.

Abstract

Bayesian optimization (BO) is a widely used sequential method for zeroth-order optimization of complex and expensive-to-compute black-box functions. Existing BO methods assume that function evaluations (feedback) are available to the learner immediately or after a fixed delay. Such assumptions may be impractical in many real-life problems like online recommendations, clinical trials, and hyperparameter tuning, where feedback is available only after a random delay. To benefit from experimental parallelization in these problems, the learner needs to start new function evaluations without waiting for the delayed feedback. In this paper, we consider the problem of BO under stochastic delayed feedback. We propose algorithms with sub-linear regret guarantees that efficiently address the dilemma of selecting new function queries while waiting for randomly delayed feedback. Building on our results, we also make novel contributions to batch BO and contextual Gaussian process bandits. Experiments on synthetic and real-life datasets verify the performance of our algorithms.
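
To make the setting concrete, below is a minimal, self-contained sketch (not the paper's own algorithm) of GP-UCB-style BO in which each evaluation's feedback arrives only after a random delay. Pending queries are handled by conditioning the GP on hallucinated posterior-mean outcomes, a common heuristic from the batch-BO literature. The objective `f`, the geometric delay distribution, and all parameter values are illustrative assumptions.

```python
# Minimal sketch: GP-UCB-style BO with stochastically delayed feedback.
# Pending queries are conditioned on via posterior-mean "hallucinations".
# All names, the objective, and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                               # black-box objective (unknown to learner)
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ls=0.3):                  # squared-exponential kernel, k(x, x) = 1
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Posterior mean/std of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

domain = np.linspace(0, 2, 200)
X_obs, y_obs = [], []                   # evaluations whose feedback has arrived
pending = []                            # (x, true_y, round_when_feedback_arrives)
beta = 2.0                              # UCB exploration parameter

for t in range(60):
    # 1) Collect any feedback whose random delay has elapsed.
    arrived = [p for p in pending if p[2] <= t]
    pending = [p for p in pending if p[2] > t]
    for x, y, _ in arrived:
        X_obs.append(x); y_obs.append(y)

    # 2) Condition on real data plus hallucinated values for pending queries.
    X_fit = np.array(X_obs + [p[0] for p in pending])
    if len(X_obs) > 0 and len(pending) > 0:
        mu_p, _ = gp_posterior(np.array(X_obs), np.array(y_obs),
                               np.array([p[0] for p in pending]))
    else:
        mu_p = np.zeros(len(pending))
    y_fit = np.array(y_obs + list(mu_p))

    # 3) Pick the next query by upper confidence bound, without waiting.
    if len(X_fit) == 0:
        x_next = rng.choice(domain)
    else:
        mu, sd = gp_posterior(X_fit, y_fit, domain)
        x_next = domain[np.argmax(mu + beta * sd)]

    # 4) Launch the evaluation; feedback arrives after a stochastic delay.
    delay = rng.geometric(0.3)          # assumed delay distribution
    pending.append((x_next, f(x_next) + 0.01 * rng.standard_normal(), t + delay))

print("best observed x:", X_obs[int(np.argmax(y_obs))])
```

The hallucination step is what lets the learner keep issuing queries instead of idling until feedback returns; the paper's algorithms address this same query-selection dilemma with sub-linear regret guarantees, whereas this sketch only illustrates the mechanics of the setting.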

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-verma22a,
  title     = {{B}ayesian Optimization under Stochastic Delayed Feedback},
  author    = {Verma, Arun and Dai, Zhongxiang and Low, Bryan Kian Hsiang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {22145--22167},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/verma22a/verma22a.pdf},
  url       = {https://proceedings.mlr.press/v162/verma22a.html}
}
Endnote
%0 Conference Paper
%T Bayesian Optimization under Stochastic Delayed Feedback
%A Arun Verma
%A Zhongxiang Dai
%A Bryan Kian Hsiang Low
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-verma22a
%I PMLR
%P 22145--22167
%U https://proceedings.mlr.press/v162/verma22a.html
%V 162
APA
Verma, A., Dai, Z. & Low, B.K.H. (2022). Bayesian Optimization under Stochastic Delayed Feedback. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:22145-22167. Available from https://proceedings.mlr.press/v162/verma22a.html.