Efficient Large-Scale Gaussian Process Bandits by Believing only Informative Actions

Amrit Singh Bedi, Dheeraj Peddireddy, Vaneet Aggarwal, Alec Koppel
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:924-934, 2020.

Abstract

Bayesian optimization is a framework for global search via maximum a posteriori updates rather than simulated annealing, and has gained prominence in tuning the hyper-parameters of machine learning algorithms and, more broadly, in decision-making under uncertainty. In this work, we cast Bayesian optimization as a multi-armed bandit problem, where the payoff function is sampled from a Gaussian process (GP). Further, we focus on action selection via the GP upper confidence bound (UCB). While numerous prior works use GPs in bandit settings, they do not apply when the total number of iterations $T$ may be large, as the complexity of computing the posterior parameters scales cubically with the number of past observations. To circumvent this computational burden, we propose a simple statistical test: only incorporate an action into the GP posterior when its conditional entropy exceeds an $\epsilon$ threshold. Doing so permits us to derive sublinear regret bounds for GP bandit algorithms, up to factors depending on the compression parameter $\epsilon$, for both discrete and continuous action sets. Moreover, the complexity of the GP posterior remains provably finite. Experimentally, we observe state-of-the-art accuracy and complexity tradeoffs for GP bandit algorithms on various hyper-parameter tuning tasks, suggesting the merits of managing the complexity of GPs in bandit settings.
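
For a Gaussian posterior, the conditional entropy of an observation is $\frac{1}{2}\log(2\pi e \sigma^2)$, so the retention test the abstract describes reduces to checking whether the posterior variance at the chosen action is large enough. The following is a minimal illustrative sketch (not the authors' implementation; the kernel, entropy threshold, toy objective, and all names are assumptions) of GP-UCB over a discrete action set, where a sample is admitted into the retained dictionary only if it passes the entropy test, keeping the posterior's cost bounded.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # Toy payoff function to optimize (assumed for this demo).
        return np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)

    def k(X1, X2, ell=0.2):
        # Squared-exponential kernel with unit signal variance.
        d2 = (X1[:, None] - X2[None, :]) ** 2
        return np.exp(-0.5 * d2 / ell**2)

    def gp_posterior(Xd, yd, Xq, noise=1e-2):
        # Posterior mean/variance at query points Xq, given the
        # retained dictionary (Xd, yd) only.
        if Xd.size == 0:
            return np.zeros(Xq.size), np.ones(Xq.size)
        K = k(Xd, Xd) + noise * np.eye(Xd.size)
        Ks = k(Xd, Xq)
        sol = np.linalg.solve(K, Ks)
        mu = sol.T @ yd
        var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
        return mu, var

    actions = np.linspace(0.0, 1.0, 200)   # discrete action set
    Xd, yd = np.array([]), np.array([])    # compressed GP dictionary
    # Entropy threshold: retain only if posterior variance > 0.05 here.
    eps = 0.5 * np.log(2 * np.pi * np.e * 0.05)
    T, beta = 100, 4.0

    for t in range(T):
        mu, var = gp_posterior(Xd, yd, actions)
        x_t = actions[np.argmax(mu + np.sqrt(beta * var))]  # UCB action
        y_t = f(x_t) + 0.1 * rng.standard_normal()          # noisy payoff
        _, var_t = gp_posterior(Xd, yd, np.array([x_t]))
        # Statistical test: keep the sample only if informative enough.
        if 0.5 * np.log(2 * np.pi * np.e * var_t[0]) > eps:
            Xd, yd = np.append(Xd, x_t), np.append(yd, y_t)

    print(f"dictionary size after {T} steps: {Xd.size}")

Because repeated pulls near already-retained actions have low posterior variance, they fail the test and are discarded, so the dictionary stops growing once the informative regions of the action set are covered; this is the mechanism behind the provably finite posterior complexity claimed above.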

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-bedi20a,
  title     = {Efficient Large-Scale Gaussian Process Bandits by Believing only Informative Actions},
  author    = {Bedi, Amrit Singh and Peddireddy, Dheeraj and Aggarwal, Vaneet and Koppel, Alec},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {924--934},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/bedi20a/bedi20a.pdf},
  url       = {https://proceedings.mlr.press/v120/bedi20a.html}
}
Endnote
%0 Conference Paper
%T Efficient Large-Scale Gaussian Process Bandits by Believing only Informative Actions
%A Amrit Singh Bedi
%A Dheeraj Peddireddy
%A Vaneet Aggarwal
%A Alec Koppel
%B Proceedings of the 2nd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2020
%E Alexandre M. Bayen
%E Ali Jadbabaie
%E George Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire Tomlin
%E Melanie Zeilinger
%F pmlr-v120-bedi20a
%I PMLR
%P 924--934
%U https://proceedings.mlr.press/v120/bedi20a.html
%V 120
APA
Bedi, A.S., Peddireddy, D., Aggarwal, V. & Koppel, A. (2020). Efficient Large-Scale Gaussian Process Bandits by Believing only Informative Actions. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:924-934. Available from https://proceedings.mlr.press/v120/bedi20a.html.
