Unbounded Bayesian Optimization via Regularization

Bobak Shahriari, Alexandre Bouchard-Côté, Nando de Freitas;
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1168-1176, 2016.

Abstract

Bayesian optimization has recently emerged as a powerful and flexible tool in machine learning for hyperparameter tuning and, more generally, for the efficient global optimization of expensive black-box functions. The established practice requires a user-defined bounded domain, which is assumed to contain the global optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such a domain. In this work, we modify the standard Bayesian optimization framework in a principled way to allow for unconstrained exploration of the search space. We introduce a new regularization-based method and compare it to a volume-doubling baseline on two common synthetic benchmark functions. Finally, we apply our proposed methods to the task of tuning the stochastic gradient descent optimizer for both a multi-layer perceptron and a convolutional neural network on the MNIST dataset.
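The core idea of regularized, unbounded Bayesian optimization can be sketched as follows: instead of clipping candidates to a fixed box, the acquisition function is penalized smoothly as candidates move away from an initial region, so the optimizer may still wander outside it when the expected gain justifies doing so. The sketch below is a minimal illustration, not the paper's implementation; the hinge-quadratic penalty, the `radius` and `beta` parameters, and the random-restart maximizer are assumptions chosen for brevity.

```python
import numpy as np


def hinge_quadratic_penalty(x, center, radius, beta=1.0):
    # Zero inside a ball of the given radius around `center`; grows
    # quadratically outside it. This discourages, but never forbids,
    # exploration far from the user's initial region (an assumed form
    # of regularizer for illustration).
    dist = np.linalg.norm(x - center)
    return beta * max(0.0, dist - radius) ** 2


def regularized_acquisition(x, base_acq, center, radius, beta=1.0):
    # Regularized acquisition value: base utility (e.g. expected
    # improvement) minus the soft boundary penalty.
    return base_acq(x) - hinge_quadratic_penalty(x, center, radius, beta)


def propose_next(base_acq, center, radius, beta=1.0, n_restarts=256,
                 scale=3.0, rng=None):
    # Maximize the regularized acquisition over the *unbounded* domain
    # with simple random search seeded around the current region
    # (a stand-in for a gradient-based multi-start optimizer).
    rng = np.random.default_rng(rng)
    candidates = center + scale * radius * rng.standard_normal(
        (n_restarts, center.size))
    values = [regularized_acquisition(x, base_acq, center, radius, beta)
              for x in candidates]
    return candidates[int(np.argmax(values))]


if __name__ == "__main__":
    center = np.zeros(2)
    # Toy stand-in for an acquisition function, peaked at (0.5, -0.5).
    base_acq = lambda x: -np.sum((x - np.array([0.5, -0.5])) ** 2)
    x_next = propose_next(base_acq, center, radius=1.0, rng=0)
    print(x_next)
```

Because the penalty only activates outside the ball, the method reduces to ordinary (bounded) acquisition maximization when the optimum lies inside the initial region, and pays a smoothly increasing cost otherwise.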
