Improving sample efficiency of high dimensional Bayesian optimization with MCMC

Zeji Yi, Yunyue Wei, Chu Xin Cheng, Kaibo He, Yanan Sui
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:813-824, 2024.

Abstract

Sequential optimization methods are often confronted with the curse of dimensionality in high-dimensional spaces. Current approaches under the Gaussian process framework are still burdened by the computational complexity of tracking Gaussian process posteriors and need to partition the optimization problem into small regions to ensure exploration or assume an underlying low-dimensional structure. With the idea of transiting the candidate points towards more promising positions, we propose a new method based on Markov Chain Monte Carlo to efficiently sample from an approximated posterior. We provide theoretical guarantees of its convergence in the Gaussian process Thompson sampling setting. We also show experimentally that both the Metropolis-Hastings and the Langevin Dynamics version of our algorithm outperform state-of-the-art methods in high-dimensional sequential optimization and reinforcement learning benchmarks.
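The core idea of moving candidate points toward more promising regions via MCMC can be illustrated with a minimal Metropolis-Hastings sketch. This is not the authors' implementation: the quadratic `log_density` below is a hypothetical stand-in for a sampled Gaussian process posterior, and all function names are illustrative.

```python
import numpy as np

def metropolis_hastings_step(x, log_density, step_size=0.1, rng=None):
    """One Metropolis-Hastings step: propose a Gaussian perturbation
    of the candidate and accept it with the standard MH ratio."""
    if rng is None:
        rng = np.random.default_rng()
    proposal = x + step_size * rng.standard_normal(x.shape)
    log_ratio = log_density(proposal) - log_density(x)
    if np.log(rng.uniform()) < log_ratio:
        return proposal
    return x

# Toy stand-in for a posterior sample: a density peaked at the optimum of
# f(x) = -||x||^2, so the chain should transport candidates toward the origin.
def log_density(x):
    return -np.sum(x ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=20)  # a 20-dimensional candidate point
initial_norm = np.linalg.norm(x)
for _ in range(2000):
    x = metropolis_hastings_step(x, log_density, step_size=0.2, rng=rng)
print(initial_norm, np.linalg.norm(x))
```

The Langevin-dynamics variant described in the abstract would replace the symmetric Gaussian proposal with a gradient-informed one, e.g. `proposal = x + step_size * grad_log_density(x) + noise`, trading extra gradient computation for faster transport in high dimensions.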

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-yi24a,
  title     = {Improving sample efficiency of high dimensional {B}ayesian optimization with {MCMC}},
  author    = {Yi, Zeji and Wei, Yunyue and Cheng, Chu Xin and He, Kaibo and Sui, Yanan},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {813--824},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/yi24a/yi24a.pdf},
  url       = {https://proceedings.mlr.press/v242/yi24a.html},
  abstract  = {Sequential optimization methods are often confronted with the curse of dimensionality in high-dimensional spaces. Current approaches under the Gaussian process framework are still burdened by the computational complexity of tracking Gaussian process posteriors and need to partition the optimization problem into small regions to ensure exploration or assume an underlying low-dimensional structure. With the idea of transiting the candidate points towards more promising positions, we propose a new method based on Markov Chain Monte Carlo to efficiently sample from an approximated posterior. We provide theoretical guarantees of its convergence in the Gaussian process Thompson sampling setting. We also show experimentally that both the Metropolis-Hastings and the Langevin Dynamics version of our algorithm outperform state-of-the-art methods in high-dimensional sequential optimization and reinforcement learning benchmarks.}
}
Endnote
%0 Conference Paper
%T Improving sample efficiency of high dimensional Bayesian optimization with MCMC
%A Zeji Yi
%A Yunyue Wei
%A Chu Xin Cheng
%A Kaibo He
%A Yanan Sui
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-yi24a
%I PMLR
%P 813--824
%U https://proceedings.mlr.press/v242/yi24a.html
%V 242
%X Sequential optimization methods are often confronted with the curse of dimensionality in high-dimensional spaces. Current approaches under the Gaussian process framework are still burdened by the computational complexity of tracking Gaussian process posteriors and need to partition the optimization problem into small regions to ensure exploration or assume an underlying low-dimensional structure. With the idea of transiting the candidate points towards more promising positions, we propose a new method based on Markov Chain Monte Carlo to efficiently sample from an approximated posterior. We provide theoretical guarantees of its convergence in the Gaussian process Thompson sampling setting. We also show experimentally that both the Metropolis-Hastings and the Langevin Dynamics version of our algorithm outperform state-of-the-art methods in high-dimensional sequential optimization and reinforcement learning benchmarks.
APA
Yi, Z., Wei, Y., Cheng, C.X., He, K. & Sui, Y. (2024). Improving sample efficiency of high dimensional Bayesian optimization with MCMC. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:813-824. Available from https://proceedings.mlr.press/v242/yi24a.html.
