Efficient Distributionally Robust Bayesian Optimization with Worst-case Sensitivity

Sebastian Shenghong Tay, Chuan Sheng Foo, Urano Daisuke, Richalynn Leong, Bryan Kian Hsiang Low
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:21180-21204, 2022.

Abstract

In distributionally robust Bayesian optimization (DRBO), an exact computation of the worst-case expected value requires solving an expensive convex optimization problem. We develop a fast approximation of the worst-case expected value based on the notion of worst-case sensitivity that caters to arbitrary convex distribution distances. We provide a regret bound for our novel DRBO algorithm with the fast approximation, and empirically show it is competitive with that using the exact worst-case expected value while incurring significantly less computation time. In order to guide the choice of distribution distance to be used with DRBO, we show that our approximation implicitly optimizes an objective close to an interpretable risk-sensitive value.
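The inner worst-case expected value the abstract refers to can be illustrated on a finite context support under the total-variation distance. The sketch below is not the paper's implementation; the function names are made up, and the sensitivity constant `max(f) - min(f)` is the standard first-order worst-case sensitivity for the TV ball, which may differ from the paper's formulation. It contrasts the exact inner minimization (which for TV admits a greedy solution to the convex problem) with the fast linear-in-epsilon approximation.

```python
import numpy as np

def worst_case_tv(p, f, eps):
    """Exact worst-case expectation inf_q E_q[f] over the total-variation
    ball {q : TV(q, p) <= eps} on a finite support. For TV this convex
    problem has a greedy solution: move up to eps probability mass from
    the highest-f atoms onto the lowest-f atom."""
    order = np.argsort(f)[::-1]            # atoms from highest f to lowest
    q = np.asarray(p, dtype=float).copy()
    lowest = order[-1]                     # destination: the lowest-f atom
    budget = eps
    for i in order[:-1]:
        move = min(q[i], budget)
        q[i] -= move
        q[lowest] += move
        budget -= move
        if budget <= 0:
            break
    return float(q @ f)

def sensitivity_approx_tv(p, f, eps):
    """First-order approximation E_p[f] - eps * S with TV worst-case
    sensitivity S = max(f) - min(f). The exact value function is convex
    in eps, so this tangent at eps = 0 is exact for small eps and never
    exceeds the exact worst case (it is conservative for large eps)."""
    p = np.asarray(p, dtype=float)
    f = np.asarray(f, dtype=float)
    return float(p @ f - eps * (f.max() - f.min()))
```

For a uniform distribution over `f = [1, 2, 3, 4]` and a small radius `eps = 0.1`, both give 2.2; at `eps = 0.6` the exact value is 1.15 while the approximation returns the more conservative 0.7. The appeal of the approximation is that it avoids re-solving the convex problem at every query.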

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-tay22a,
  title     = {Efficient Distributionally Robust {B}ayesian Optimization with Worst-case Sensitivity},
  author    = {Tay, Sebastian Shenghong and Foo, Chuan Sheng and Daisuke, Urano and Leong, Richalynn and Low, Bryan Kian Hsiang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {21180--21204},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/tay22a/tay22a.pdf},
  url       = {https://proceedings.mlr.press/v162/tay22a.html},
  abstract  = {In distributionally robust Bayesian optimization (DRBO), an exact computation of the worst-case expected value requires solving an expensive convex optimization problem. We develop a fast approximation of the worst-case expected value based on the notion of worst-case sensitivity that caters to arbitrary convex distribution distances. We provide a regret bound for our novel DRBO algorithm with the fast approximation, and empirically show it is competitive with that using the exact worst-case expected value while incurring significantly less computation time. In order to guide the choice of distribution distance to be used with DRBO, we show that our approximation implicitly optimizes an objective close to an interpretable risk-sensitive value.}
}
Endnote
%0 Conference Paper
%T Efficient Distributionally Robust Bayesian Optimization with Worst-case Sensitivity
%A Sebastian Shenghong Tay
%A Chuan Sheng Foo
%A Urano Daisuke
%A Richalynn Leong
%A Bryan Kian Hsiang Low
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-tay22a
%I PMLR
%P 21180--21204
%U https://proceedings.mlr.press/v162/tay22a.html
%V 162
%X In distributionally robust Bayesian optimization (DRBO), an exact computation of the worst-case expected value requires solving an expensive convex optimization problem. We develop a fast approximation of the worst-case expected value based on the notion of worst-case sensitivity that caters to arbitrary convex distribution distances. We provide a regret bound for our novel DRBO algorithm with the fast approximation, and empirically show it is competitive with that using the exact worst-case expected value while incurring significantly less computation time. In order to guide the choice of distribution distance to be used with DRBO, we show that our approximation implicitly optimizes an objective close to an interpretable risk-sensitive value.
APA
Tay, S.S., Foo, C.S., Daisuke, U., Leong, R. & Low, B.K.H. (2022). Efficient Distributionally Robust Bayesian Optimization with Worst-case Sensitivity. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:21180-21204. Available from https://proceedings.mlr.press/v162/tay22a.html.