A Generalization Bound for Online Variational Inference

Badr-Eddine Chérief-Abdellatif, Pierre Alquier, Mohammad Emtiyaz Khan
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:662-677, 2019.

Abstract

Bayesian inference provides an attractive online-learning framework to analyze sequential data, and offers generalization guarantees which hold even with model mismatch and adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this paper, we show that this is indeed the case for some variational inference (VI) algorithms. We consider a few existing online, tempered VI algorithms, as well as a new algorithm, and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that the result should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms relying on approximate Bayesian methods.
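
The "online, tempered VI" of the abstract can be read as a KL-regularized sequential update: at each round, the new variational approximation trades off the tempered expected loss on the latest observation against the KL divergence to the previous approximation. The sketch below is illustrative only, under assumptions not taken from the paper (squared loss, a univariate Gaussian variational family, and the resulting closed-form update); the function names and model are ours.

    import numpy as np

    def online_vi_gaussian(xs, alpha=1.0, m0=0.0, v0=10.0):
        # Tempered online VI with q_t = N(m_t, v_t): at each round t, minimise
        #     alpha * E_q[(x_t - theta)^2 / 2] + KL(q || q_{t-1})
        # over (m, v). For this Gaussian family the minimiser is closed form.
        m, v = m0, v0
        for x in xs:
            prec = alpha + 1.0 / v          # new precision: tempered loss + carry-over
            m = (alpha * x + m / v) / prec  # precision-weighted mean update
            v = 1.0 / prec
        return m, v

    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=1.0, size=200)
    print(online_vi_gaussian(data, alpha=0.5))  # mean -> 2.0, variance shrinks like 1/(alpha*t)

With alpha = 1 this toy update coincides with exact Bayesian updating for a Gaussian likelihood; smaller alpha (tempering) slows the concentration of the approximate posterior, and loosely speaking it is this learning-rate parameter that regret-style generalization bounds for such algorithms are stated in terms of.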

Cite this Paper


BibTeX
@InProceedings{pmlr-v101-cherief-abdellatif19a,
  title     = {A Generalization Bound for Online Variational Inference},
  author    = {Ch\'erief-Abdellatif, Badr-Eddine and Alquier, Pierre and Khan, Mohammad Emtiyaz},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  pages     = {662--677},
  year      = {2019},
  editor    = {Lee, Wee Sun and Suzuki, Taiji},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v101/cherief-abdellatif19a/cherief-abdellatif19a.pdf},
  url       = {https://proceedings.mlr.press/v101/cherief-abdellatif19a.html},
  abstract  = {Bayesian inference provides an attractive online-learning framework to analyze sequential data, and offers generalization guarantees which hold even with model mismatch and adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this paper, we show that this is indeed the case for some variational inference (VI) algorithms. We consider a few existing online, tempered VI algorithms, as well as a new algorithm, and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that the result should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms relying on approximate Bayesian methods.}
}
Endnote
%0 Conference Paper
%T A Generalization Bound for Online Variational Inference
%A Badr-Eddine Chérief-Abdellatif
%A Pierre Alquier
%A Mohammad Emtiyaz Khan
%B Proceedings of The Eleventh Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Wee Sun Lee
%E Taiji Suzuki
%F pmlr-v101-cherief-abdellatif19a
%I PMLR
%P 662--677
%U https://proceedings.mlr.press/v101/cherief-abdellatif19a.html
%V 101
%X Bayesian inference provides an attractive online-learning framework to analyze sequential data, and offers generalization guarantees which hold even with model mismatch and adversaries. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods are usually employed, but do such methods preserve the generalization properties of Bayesian inference? In this paper, we show that this is indeed the case for some variational inference (VI) algorithms. We consider a few existing online, tempered VI algorithms, as well as a new algorithm, and derive their generalization bounds. Our theoretical result relies on the convexity of the variational objective, but we argue that the result should hold more generally and present empirical evidence in support of this. Our work presents theoretical justifications in favor of online algorithms relying on approximate Bayesian methods.
APA
Chérief-Abdellatif, B., Alquier, P. & Khan, M. E. (2019). A Generalization Bound for Online Variational Inference. Proceedings of The Eleventh Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 101:662-677. Available from https://proceedings.mlr.press/v101/cherief-abdellatif19a.html.
