Automated Model Selection with Bayesian Quadrature

Henry Chai, Jean-Francois Ton, Michael A. Osborne, Roman Garnett
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:931-940, 2019.

Abstract

We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection. The state-of-the-art for comparing the evidence of multiple models relies on Monte Carlo methods, which converge slowly and are unreliable for computationally expensive models. Although previous research has shown that BQ offers sample efficiency superior to Monte Carlo in computing the evidence of an individual model, applying BQ directly to model comparison may waste computation producing an overly-accurate estimate for the evidence of a clearly poor model. We propose an automated and efficient algorithm for computing the most-relevant quantity for model selection: the posterior model probability. Our technique maximizes the mutual information between this quantity and observations of the models’ likelihoods, yielding efficient sample acquisition across disparate model spaces when likelihood observations are limited. Our method produces more-accurate posterior estimates using fewer likelihood evaluations than standard Bayesian quadrature and Monte Carlo estimators, as we demonstrate on synthetic and real-world examples.
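The building block the paper extends, Bayesian quadrature for estimating a single model's evidence, can be sketched briefly. The snippet below is a minimal one-dimensional illustration of vanilla BQ (a GP with a squared-exponential kernel over the likelihood, integrated against a Gaussian prior using the closed-form kernel mean); it is not the paper's mutual-information acquisition scheme, and the function and parameter names are illustrative.

```python
# Minimal sketch of vanilla Bayesian quadrature (1-D), for illustration only:
# place a GP prior on the likelihood L(theta), condition on a few evaluations,
# and integrate the GP posterior mean against a Gaussian prior p(theta).
import numpy as np

def sq_exp_kernel(a, b, ell=1.0):
    """Squared-exponential kernel k(a, b) = exp(-0.5 (a - b)^2 / ell^2)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def bq_evidence(thetas, likelihoods, prior_mu=0.0, prior_var=1.0,
                ell=0.5, jitter=1e-8):
    """Estimate Z = integral of L(theta) * N(theta | prior_mu, prior_var)."""
    K = sq_exp_kernel(thetas, thetas, ell) + jitter * np.eye(len(thetas))
    # Kernel mean z_i = int k(theta, theta_i) N(theta) dtheta, which is
    # available in closed form for the SE kernel and a Gaussian prior.
    s2 = ell**2 + prior_var
    z = ell / np.sqrt(s2) * np.exp(-0.5 * (thetas - prior_mu)**2 / s2)
    # BQ estimate: z^T K^{-1} L, a weighted sum of likelihood observations.
    weights = np.linalg.solve(K, z)
    return weights @ likelihoods
```

With a Gaussian "likelihood" the evidence integral is analytic, which makes the sketch easy to sanity-check; the paper's contribution is choosing where to evaluate the likelihoods of *several* models so that the posterior model probabilities, not each evidence individually, are estimated accurately.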

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-chai19a,
  title     = {Automated Model Selection with {B}ayesian Quadrature},
  author    = {Chai, Henry and Ton, Jean-Francois and Osborne, Michael A. and Garnett, Roman},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {931--940},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/chai19a/chai19a.pdf},
  url       = {https://proceedings.mlr.press/v97/chai19a.html},
  abstract  = {We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection. The state-of-the-art for comparing the evidence of multiple models relies on Monte Carlo methods, which converge slowly and are unreliable for computationally expensive models. Although previous research has shown that BQ offers sample efficiency superior to Monte Carlo in computing the evidence of an individual model, applying BQ directly to model comparison may waste computation producing an overly-accurate estimate for the evidence of a clearly poor model. We propose an automated and efficient algorithm for computing the most-relevant quantity for model selection: the posterior model probability. Our technique maximizes the mutual information between this quantity and observations of the models’ likelihoods, yielding efficient sample acquisition across disparate model spaces when likelihood observations are limited. Our method produces more-accurate posterior estimates using fewer likelihood evaluations than standard Bayesian quadrature and Monte Carlo estimators, as we demonstrate on synthetic and real-world examples.}
}
Endnote
%0 Conference Paper
%T Automated Model Selection with Bayesian Quadrature
%A Henry Chai
%A Jean-Francois Ton
%A Michael A. Osborne
%A Roman Garnett
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-chai19a
%I PMLR
%P 931--940
%U https://proceedings.mlr.press/v97/chai19a.html
%V 97
%X We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection. The state-of-the-art for comparing the evidence of multiple models relies on Monte Carlo methods, which converge slowly and are unreliable for computationally expensive models. Although previous research has shown that BQ offers sample efficiency superior to Monte Carlo in computing the evidence of an individual model, applying BQ directly to model comparison may waste computation producing an overly-accurate estimate for the evidence of a clearly poor model. We propose an automated and efficient algorithm for computing the most-relevant quantity for model selection: the posterior model probability. Our technique maximizes the mutual information between this quantity and observations of the models’ likelihoods, yielding efficient sample acquisition across disparate model spaces when likelihood observations are limited. Our method produces more-accurate posterior estimates using fewer likelihood evaluations than standard Bayesian quadrature and Monte Carlo estimators, as we demonstrate on synthetic and real-world examples.
APA
Chai, H., Ton, J., Osborne, M. A. & Garnett, R. (2019). Automated Model Selection with Bayesian Quadrature. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:931-940. Available from https://proceedings.mlr.press/v97/chai19a.html.