Safe-Bayesian Generalized Linear Regression

Rianne de Heide, Alisa Kirichenko, Peter Grunwald, Nishant Mehta
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2623-2633, 2020.

Abstract

We study generalized Bayesian inference under misspecification, i.e. when the model is ‘wrong but useful’. Generalized Bayes equips the likelihood with a learning rate $\eta$. We show that for generalized linear models (GLMs), $\eta$-generalized Bayes concentrates around the best approximation of the truth within the model for specific $\eta \neq 1$, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We derive MCMC samplers for generalized Bayesian lasso and logistic regression and give examples of both simulated and real-world data in which generalized Bayes substantially outperforms standard Bayes.
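
For context, the learning-rate construction the abstract refers to is the tempered posterior standard in the generalized-Bayes literature: the likelihood is raised to the power $\eta$ before being combined with the prior, so that $\eta = 1$ recovers ordinary Bayes and $\eta < 1$ downweights a possibly misspecified likelihood. A minimal sketch in our own notation (the paper's precise setup may differ):

\[
  \pi_\eta(\theta \mid z_1, \dots, z_n)
    \;\propto\;
  \pi(\theta) \prod_{i=1}^{n} p(z_i \mid \theta)^{\eta}
\]

The abstract also mentions MCMC samplers for generalized Bayesian lasso and logistic regression. The sketch below is not the authors' sampler: it is a generic random-walk Metropolis chain targeting the $\eta$-tempered logistic-regression posterior, with a hypothetical standard-normal prior on the coefficients, included only to make the tempering concrete.

import numpy as np

def log_generalized_posterior(beta, X, y, eta):
    """Log of prior(beta) * likelihood(beta)^eta, up to an additive constant."""
    logits = X @ beta
    # Numerically stable Bernoulli log-likelihood: sum_i y_i*logit_i - log(1 + exp(logit_i))
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * np.sum(beta ** 2)  # hypothetical standard-normal prior
    return eta * log_lik + log_prior      # eta = 1 gives the ordinary posterior

def metropolis(X, y, eta, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis targeting the eta-generalized posterior."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    current = log_generalized_posterior(beta, X, y, eta)
    samples = []
    for _ in range(n_iter):
        proposal = beta + step * rng.standard_normal(beta.shape)
        candidate = log_generalized_posterior(proposal, X, y, eta)
        if np.log(rng.uniform()) < candidate - current:  # Metropolis accept/reject
            beta, current = proposal, candidate
        samples.append(beta.copy())
    return np.array(samples)

# Toy usage: eta = 0.5 tempers (downweights) the likelihood.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
y = (X @ np.array([1.0, -1.0]) + 0.5 * rng.standard_normal(100) > 0).astype(float)
draws = metropolis(X, y, eta=0.5)
print(draws[len(draws) // 2:].mean(axis=0))  # posterior mean after discarding burn-in

In practice $\eta$ would be chosen in a data-driven way, e.g. by a method such as SafeBayes, which the title alludes to; the fixed value above is purely illustrative.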

Cite this Paper

BibTeX
@InProceedings{pmlr-v108-heide20a,
  title = {Safe-Bayesian Generalized Linear Regression},
  author = {de Heide, Rianne and Kirichenko, Alisa and Grunwald, Peter and Mehta, Nishant},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages = {2623--2633},
  year = {2020},
  editor = {Chiappa, Silvia and Calandra, Roberto},
  volume = {108},
  series = {Proceedings of Machine Learning Research},
  month = {26--28 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v108/heide20a/heide20a.pdf},
  url = {https://proceedings.mlr.press/v108/heide20a.html},
  abstract = {We study generalized Bayesian inference under misspecification, i.e. when the model is ‘wrong but useful’. Generalized Bayes equips the likelihood with a learning rate $\eta$. We show that for generalized linear models (GLMs), $\eta$-generalized Bayes concentrates around the best approximation of the truth within the model for specific $\eta \neq 1$, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We derive MCMC samplers for generalized Bayesian lasso and logistic regression and give examples of both simulated and real-world data in which generalized Bayes substantially outperforms standard Bayes.}
}
Endnote
%0 Conference Paper
%T Safe-Bayesian Generalized Linear Regression
%A Rianne de Heide
%A Alisa Kirichenko
%A Peter Grunwald
%A Nishant Mehta
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-heide20a
%I PMLR
%P 2623--2633
%U https://proceedings.mlr.press/v108/heide20a.html
%V 108
%X We study generalized Bayesian inference under misspecification, i.e. when the model is ‘wrong but useful’. Generalized Bayes equips the likelihood with a learning rate $\eta$. We show that for generalized linear models (GLMs), $\eta$-generalized Bayes concentrates around the best approximation of the truth within the model for specific $\eta \neq 1$, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We derive MCMC samplers for generalized Bayesian lasso and logistic regression and give examples of both simulated and real-world data in which generalized Bayes substantially outperforms standard Bayes.
APA
de Heide, R., Kirichenko, A., Grunwald, P. & Mehta, N. (2020). Safe-Bayesian Generalized Linear Regression. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2623-2633. Available from https://proceedings.mlr.press/v108/heide20a.html.
