Distributionally Robust Bayesian Quadrature Optimization

Thanh Nguyen, Sunil Gupta, Huong Ha, Santu Rana, Svetha Venkatesh
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1921-1931, 2020.

Abstract

Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Though the Monte Carlo estimate is unbiased, it has high variance given a small set of samples, and can thus result in a spurious objective function. We adopt a distributionally robust optimization perspective on this problem, maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior-sampling-based algorithm, namely distributionally robust BQO (DRBQO), for this purpose. We demonstrate the empirical effectiveness of our proposed framework on synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret.
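The worst-case-expectation idea in the abstract can be illustrated with a small sketch. As an illustration only (the paper's exact confidence set and acquisition rule are not reproduced here), consider the worst-case mean of sampled integrand values over all reweightings of the empirical distribution within a KL-divergence ball, a common choice of ambiguity set in distributionally robust optimization. This quantity has a one-dimensional dual that can be evaluated by a simple grid search:

```python
import numpy as np

def worst_case_mean_kl(values, rho, lambdas=None):
    """Worst-case (smallest) mean of `values` over all reweightings Q of the
    empirical distribution P_n satisfying KL(Q || P_n) <= rho.

    Uses the dual form  sup_{lam > 0}  -lam * log E_{P_n}[exp(-v / lam)] - lam * rho,
    evaluated on a log-spaced grid of lam, so the result is a slightly
    conservative lower bound on the true worst-case expectation.
    """
    v = np.asarray(values, dtype=float)
    if lambdas is None:
        lambdas = np.logspace(-3, 3, 2000)
    # Shift values so the smallest is 0; this keeps exp() numerically stable.
    # The shift is added back at the end (the mean is shift-equivariant).
    shifted = v - v.min()
    inner = np.mean(np.exp(-shifted[None, :] / lambdas[:, None]), axis=1)
    duals = -lambdas * np.log(inner) - lambdas * rho
    return v.min() + duals.max()
```

With `rho` near zero this recovers the plain Monte Carlo mean; as `rho` grows, the estimate moves toward the worst sampled value, which is the conservatism that guards against a spurious objective when only a few samples are available.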

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-nguyen20a,
  title     = {Distributionally Robust Bayesian Quadrature Optimization},
  author    = {Nguyen, Thanh and Gupta, Sunil and Ha, Huong and Rana, Santu and Venkatesh, Svetha},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1921--1931},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/nguyen20a/nguyen20a.pdf},
  url       = {https://proceedings.mlr.press/v108/nguyen20a.html},
  abstract  = {Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Though the Monte Carlo estimate is unbiased, it has high variance given a small set of samples, and can thus result in a spurious objective function. We adopt a distributionally robust optimization perspective on this problem, maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior-sampling-based algorithm, namely distributionally robust BQO (DRBQO), for this purpose. We demonstrate the empirical effectiveness of our proposed framework on synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret.}
}
Endnote
%0 Conference Paper
%T Distributionally Robust Bayesian Quadrature Optimization
%A Thanh Nguyen
%A Sunil Gupta
%A Huong Ha
%A Santu Rana
%A Svetha Venkatesh
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-nguyen20a
%I PMLR
%P 1921--1931
%U https://proceedings.mlr.press/v108/nguyen20a.html
%V 108
%X Bayesian quadrature optimization (BQO) maximizes the expectation of an expensive black-box integrand taken over a known probability distribution. In this work, we study BQO under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples. A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set. Though the Monte Carlo estimate is unbiased, it has high variance given a small set of samples, and can thus result in a spurious objective function. We adopt a distributionally robust optimization perspective on this problem, maximizing the expected objective under the most adversarial distribution. In particular, we propose a novel posterior-sampling-based algorithm, namely distributionally robust BQO (DRBQO), for this purpose. We demonstrate the empirical effectiveness of our proposed framework on synthetic and real-world problems, and characterize its theoretical convergence via Bayesian regret.
APA
Nguyen, T., Gupta, S., Ha, H., Rana, S., & Venkatesh, S. (2020). Distributionally Robust Bayesian Quadrature Optimization. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1921-1931. Available from https://proceedings.mlr.press/v108/nguyen20a.html.