A Framework for Bayesian Optimization in Embedded Subspaces
Amin Nayebi, Alexander Munteanu, Matthias Poloczek; Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4752-4761, 2019.
Abstract
We present a theoretically founded approach for high-dimensional Bayesian optimization based on low-dimensional subspace embeddings. We prove that the error in the Gaussian process model is bounded tightly when going from the original high-dimensional search domain to the low-dimensional embedding. This implies that the optimization process in the low-dimensional embedding proceeds essentially as if it were run directly on an unknown active subspace of low dimensionality. The argument applies to a large class of algorithms and GP models, including non-stationary kernels. Moreover, we provide an efficient implementation based on hashing and demonstrate empirically that this subspace embedding achieves considerably better results than the previously proposed methods for high-dimensional BO based on Gaussian matrix projections and structure-learning.
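To make the hashing idea concrete, the sketch below illustrates a count-sketch-style subspace embedding of the kind the abstract alludes to: each of the D high-dimensional coordinates is hashed to one of d low-dimensional coordinates and given a random sign, so a candidate point y in [-1, 1]^d maps to a point x in [-1, 1]^D. This is a minimal illustration under that assumption, not the authors' reference implementation; the function and variable names (make_hashing_embedding, embed, h, s) are ours.

import numpy as np

def make_hashing_embedding(D, d, rng=None):
    """Build a count-sketch-style embedding from R^d to R^D.

    Each high-dimensional coordinate i is assigned a random target
    coordinate h(i) in {0, ..., d-1} and a random sign s(i) in {-1, +1};
    the embedded point is x_i = s(i) * y_{h(i)}.
    """
    rng = np.random.default_rng(rng)
    h = rng.integers(0, d, size=D)        # hash of each high-dim coordinate
    s = rng.choice([-1.0, 1.0], size=D)   # random sign flips

    def embed(y):
        y = np.asarray(y, dtype=float)    # y lives in the low-dim box [-1, 1]^d
        return s * y[h]                   # x stays inside [-1, 1]^D

    return embed

# Example: run the search in d = 4 dimensions, evaluate the objective in D = 100.
embed = make_hashing_embedding(D=100, d=4, rng=0)
y = np.random.default_rng(1).uniform(-1.0, 1.0, size=4)
x = embed(y)                              # candidate point for the black-box objective

A convenient property of this construction is that any y in the low-dimensional box embeds to a point that already satisfies the box constraints of the original domain, so no clipping or back-projection is needed; a BO loop can fit its Gaussian process and optimize its acquisition function entirely in the d-dimensional space, calling the expensive objective only at embed(y).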
@InProceedings{pmlr-v97-nayebi19a,
title = {A Framework for {B}ayesian Optimization in Embedded Subspaces},
author = {Nayebi, Amin and Munteanu, Alexander and Poloczek, Matthias},
booktitle = {Proceedings of the 36th International Conference on Machine Learning},
pages = {4752--4761},
year = {2019},
editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
volume = {97},
series = {Proceedings of Machine Learning Research},
address = {Long Beach, California, USA},
month = {09--15 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v97/nayebi19a/nayebi19a.pdf},
url = {http://proceedings.mlr.press/v97/nayebi19a.html},
abstract = {We present a theoretically founded approach for high-dimensional Bayesian optimization based on low-dimensional subspace embeddings. We prove that the error in the Gaussian process model is bounded tightly when going from the original high-dimensional search domain to the low-dimensional embedding. This implies that the optimization process in the low-dimensional embedding proceeds essentially as if it were run directly on an unknown active subspace of low dimensionality. The argument applies to a large class of algorithms and GP models, including non-stationary kernels. Moreover, we provide an efficient implementation based on hashing and demonstrate empirically that this subspace embedding achieves considerably better results than the previously proposed methods for high-dimensional BO based on Gaussian matrix projections and structure-learning.}
}
%0 Conference Paper
%T A Framework for Bayesian Optimization in Embedded Subspaces
%A Amin Nayebi
%A Alexander Munteanu
%A Matthias Poloczek
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-nayebi19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 4752--4761
%U http://proceedings.mlr.press/v97/nayebi19a.html
%V 97
%W PMLR
%X We present a theoretically founded approach for high-dimensional Bayesian optimization based on low-dimensional subspace embeddings. We prove that the error in the Gaussian process model is bounded tightly when going from the original high-dimensional search domain to the low-dimensional embedding. This implies that the optimization process in the low-dimensional embedding proceeds essentially as if it were run directly on an unknown active subspace of low dimensionality. The argument applies to a large class of algorithms and GP models, including non-stationary kernels. Moreover, we provide an efficient implementation based on hashing and demonstrate empirically that this subspace embedding achieves considerably better results than the previously proposed methods for high-dimensional BO based on Gaussian matrix projections and structure-learning.
Nayebi, A., Munteanu, A. & Poloczek, M. (2019). A Framework for Bayesian Optimization in Embedded Subspaces. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:4752-4761.