Vanilla Bayesian Optimization Performs Great in High Dimensions

Carl Hvarfner, Erik Orm Hellsten, Luigi Nardi
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:20793-20817, 2024.

Abstract

High-dimensional optimization problems have long been considered the Achilles’ heel of Bayesian optimization (BO) algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aims to make BO more performant in this setting, commonly by imposing various simplifying assumptions on the objective, thereby decreasing its presumed complexity. In this paper, we identify the degeneracies that make vanilla BO poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of model complexity. Motivated by the model complexity measure, we derive an enhancement to the prior assumptions that are typical of the vanilla BO algorithm, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification, a simple scaling of the Gaussian process lengthscale prior with the dimensionality, reveals that standard BO works drastically better than previously thought in high dimensions. Our insights are supplemented by substantial outperformance of existing state-of-the-art methods on multiple commonly considered real-world high-dimensional tasks.
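The abstract describes the modification only at a high level: the Gaussian process lengthscale prior is scaled with the input dimensionality. A minimal sketch of one plausible instantiation is given below, assuming a log-normal lengthscale prior whose location parameter is shifted by 0.5·log(d), so that the prior median lengthscale grows as √d; the base constants here are illustrative assumptions, not values taken from the paper.

```python
import math

def lengthscale_prior_params(d: int) -> tuple[float, float]:
    """Hypothetical dimension-scaled log-normal prior over GP lengthscales.

    Returns (loc, scale) of a LogNormal(loc, scale) prior. Shifting the
    location by 0.5 * log(d) makes the prior median lengthscale,
    exp(loc), grow proportionally to sqrt(d). The base constants
    (sqrt(2), sqrt(3)) are placeholders for illustration only.
    """
    loc = math.sqrt(2.0) + 0.5 * math.log(d)
    scale = math.sqrt(3.0)
    return loc, scale

# With this scaling, moving from d=1 to d=100 multiplies the prior
# median lengthscale by sqrt(100) = 10, encoding smoother (simpler)
# functions as the dimensionality grows.
```

The intent of such a scaling is that, as d grows, the model a priori favors longer lengthscales and hence lower-complexity functions, without restricting the objective's structure.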

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-hvarfner24a,
  title     = {Vanilla {B}ayesian Optimization Performs Great in High Dimensions},
  author    = {Hvarfner, Carl and Hellsten, Erik Orm and Nardi, Luigi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {20793--20817},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/hvarfner24a/hvarfner24a.pdf},
  url       = {https://proceedings.mlr.press/v235/hvarfner24a.html},
  abstract  = {High-dimensional optimization problems have long been considered the Achilles’ heel of Bayesian optimization algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aim to make BO more performant in this setting, commonly by imposing various simplifying assumptions on the objective, thereby decreasing its presumed complexity. In this paper, we identify the degeneracies that make vanilla BO poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of model complexity. Motivated by the model complexity measure, we derive an enhancement to the prior assumptions that are typical of the vanilla BO algorithm, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification - a simple scaling of the Gaussian process lengthscale prior in the dimensionality - reveals that standard BO works drastically better than previously thought in high dimensions. Our insights are supplemented by substantial out-performance of existing state-of-the-art on multiple commonly considered real-world high-dimensional tasks.}
}
Endnote
%0 Conference Paper
%T Vanilla Bayesian Optimization Performs Great in High Dimensions
%A Carl Hvarfner
%A Erik Orm Hellsten
%A Luigi Nardi
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-hvarfner24a
%I PMLR
%P 20793--20817
%U https://proceedings.mlr.press/v235/hvarfner24a.html
%V 235
%X High-dimensional optimization problems have long been considered the Achilles’ heel of Bayesian optimization algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aim to make BO more performant in this setting, commonly by imposing various simplifying assumptions on the objective, thereby decreasing its presumed complexity. In this paper, we identify the degeneracies that make vanilla BO poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of model complexity. Motivated by the model complexity measure, we derive an enhancement to the prior assumptions that are typical of the vanilla BO algorithm, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification - a simple scaling of the Gaussian process lengthscale prior in the dimensionality - reveals that standard BO works drastically better than previously thought in high dimensions. Our insights are supplemented by substantial out-performance of existing state-of-the-art on multiple commonly considered real-world high-dimensional tasks.
APA
Hvarfner, C., Hellsten, E.O. & Nardi, L. (2024). Vanilla Bayesian Optimization Performs Great in High Dimensions. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:20793-20817. Available from https://proceedings.mlr.press/v235/hvarfner24a.html.
