Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models

Rishit Sheth, Roni Khardon
Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference, PMLR 118:1-18, 2020.

Abstract

We propose that approximate Bayesian algorithms should optimize a new criterion, directly derived from the loss, to calculate their approximate posterior, which we refer to as the pseudo-posterior. Unlike standard variational inference, which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions made with the pseudo-posterior. Our criterion can be used to derive new sparse Gaussian process algorithms that have error guarantees applicable to various likelihoods.

Cite this Paper


BibTeX
@InProceedings{pmlr-v118-sheth20a,
  title     = {Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models},
  author    = {Sheth, Rishit and Khardon, Roni},
  booktitle = {Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference},
  pages     = {1--18},
  year      = {2020},
  editor    = {Zhang, Cheng and Ruiz, Francisco and Bui, Thang and Dieng, Adji Bousso and Liang, Dawen},
  volume    = {118},
  series    = {Proceedings of Machine Learning Research},
  month     = {08 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v118/sheth20a/sheth20a.pdf},
  url       = {http://proceedings.mlr.press/v118/sheth20a.html},
  abstract  = {We propose that approximate Bayesian algorithms should optimize a new criterion, directly derived from the loss, to calculate their approximate posterior which we refer to as pseudo-posterior. Unlike standard variational inference which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions with the pseudo-posterior. Our criterion can be used to derive new sparse Gaussian process algorithms that have error guarantees applicable to various likelihoods.}
}
Endnote
%0 Conference Paper
%T Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models
%A Rishit Sheth
%A Roni Khardon
%B Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference
%C Proceedings of Machine Learning Research
%D 2020
%E Cheng Zhang
%E Francisco Ruiz
%E Thang Bui
%E Adji Bousso Dieng
%E Dawen Liang
%F pmlr-v118-sheth20a
%I PMLR
%P 1--18
%U http://proceedings.mlr.press/v118/sheth20a.html
%V 118
%X We propose that approximate Bayesian algorithms should optimize a new criterion, directly derived from the loss, to calculate their approximate posterior which we refer to as pseudo-posterior. Unlike standard variational inference which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions with the pseudo-posterior. Our criterion can be used to derive new sparse Gaussian process algorithms that have error guarantees applicable to various likelihoods.
APA
Sheth, R. & Khardon, R. (2020). Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models. Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference, in Proceedings of Machine Learning Research 118:1-18. Available from http://proceedings.mlr.press/v118/sheth20a.html.