Batch Bayesian Optimization via Local Penalization

Javier Gonzalez, Zhenwen Dai, Philipp Hennig, Neil Lawrence
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:648-657, 2016.

Abstract

The popularity of Bayesian optimization methods for efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow the exploration of the parameter space to occur sequentially. Often, it is desirable to simultaneously propose batches of parameter values to explore. This is particularly the case when large parallel processing facilities are available. These could be either computational or physical facets of the process being optimized. Batch methods, however, require the modeling of the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.
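The local-repulsion idea in the abstract can be illustrated with a small greedy batch-selection loop. The sketch below is illustrative, not the authors' implementation: the 1-D candidate grid, the helper names (`penalizer`, `select_batch`), and the toy acquisition function are all assumptions. Each pending point x_j multiplies the acquisition by a soft penalty that suppresses a ball around x_j, with radius governed by the Lipschitz estimate L, the surrogate's posterior mean and standard deviation at x_j, and the best observed value M.

```python
import math

def penalizer(x, xj, mu, sigma, L, M):
    """Soft penalty from a pending evaluation at xj.

    Approximates the probability that x lies outside the ball around xj
    that is unlikely to contain the maximizer; the radius depends on the
    Lipschitz estimate L, the surrogate mean mu / std sigma at xj, and
    the current best observed value M.
    """
    z = (L * abs(x - xj) - M + mu(xj)) / (math.sqrt(2.0) * sigma(xj))
    return 0.5 * math.erfc(-z)  # Gaussian CDF evaluated at z

def select_batch(candidates, acq, mu, sigma, L, M, batch_size):
    """Greedy batch construction: each new point maximizes the acquisition
    times the penalizers of the points already selected for the batch."""
    batch = []
    for _ in range(batch_size):
        best = max(
            candidates,
            key=lambda x: acq(x) * math.prod(
                penalizer(x, xj, mu, sigma, L, M) for xj in batch
            ),
        )
        batch.append(best)
    return batch
```

With a toy acquisition peaked at 0.5, the first selected point is the peak itself, and the penalizer then pushes the second point away from it, which is exactly the local repulsion the heuristic is designed to capture.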

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-gonzalez16a,
  title     = {Batch Bayesian Optimization via Local Penalization},
  author    = {Gonzalez, Javier and Dai, Zhenwen and Hennig, Philipp and Lawrence, Neil},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {648--657},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/gonzalez16a.pdf},
  url       = {http://proceedings.mlr.press/v51/gonzalez16a.html},
  abstract  = {The popularity of Bayesian optimization methods for efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow the exploration of the parameter space to occur sequentially. Often, it is desirable to simultaneously propose batches of parameter values to explore. This is particularly the case when large parallel processing facilities are available. These could be either computational or physical facets of the process being optimized. Batch methods, however, require the modeling of the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.}
}
Endnote
%0 Conference Paper
%T Batch Bayesian Optimization via Local Penalization
%A Javier Gonzalez
%A Zhenwen Dai
%A Philipp Hennig
%A Neil Lawrence
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-gonzalez16a
%I PMLR
%P 648--657
%U http://proceedings.mlr.press/v51/gonzalez16a.html
%V 51
%X The popularity of Bayesian optimization methods for efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow the exploration of the parameter space to occur sequentially. Often, it is desirable to simultaneously propose batches of parameter values to explore. This is particularly the case when large parallel processing facilities are available. These could be either computational or physical facets of the process being optimized. Batch methods, however, require the modeling of the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.
RIS
TY  - CPAPER
TI  - Batch Bayesian Optimization via Local Penalization
AU  - Javier Gonzalez
AU  - Zhenwen Dai
AU  - Philipp Hennig
AU  - Neil Lawrence
BT  - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA  - 2016/05/02
ED  - Arthur Gretton
ED  - Christian C. Robert
ID  - pmlr-v51-gonzalez16a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 51
SP  - 648
EP  - 657
L1  - http://proceedings.mlr.press/v51/gonzalez16a.pdf
UR  - http://proceedings.mlr.press/v51/gonzalez16a.html
AB  - The popularity of Bayesian optimization methods for efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow the exploration of the parameter space to occur sequentially. Often, it is desirable to simultaneously propose batches of parameter values to explore. This is particularly the case when large parallel processing facilities are available. These could be either computational or physical facets of the process being optimized. Batch methods, however, require the modeling of the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.
ER  -
APA
Gonzalez, J., Dai, Z., Hennig, P. & Lawrence, N. (2016). Batch Bayesian Optimization via Local Penalization. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:648-657. Available from http://proceedings.mlr.press/v51/gonzalez16a.html.