Pseudo-Bayesian Learning via Direct Loss Minimization with Applications to Sparse Gaussian Process Models
Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference, PMLR 118:1-18, 2020.
Abstract
We propose that approximate Bayesian algorithms should optimize a new criterion, derived directly from the loss, to compute their approximate posterior, which we refer to as the pseudo-posterior. Unlike standard variational inference, which optimizes a lower bound on the log marginal likelihood, the new algorithms can be analyzed to provide loss guarantees on the predictions made with the pseudo-posterior. Our criterion can be used to derive new sparse Gaussian process algorithms with error guarantees that apply to a range of likelihoods.
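To make the contrast with standard variational inference concrete, one natural loss-derived data term for log loss scores the log of the predictive probability, log E_q[p(y|f)], whereas the ELBO's data term is the expected log likelihood, E_q[log p(y|f)]; by Jensen's inequality the former is never smaller. The sketch below is a minimal Monte Carlo illustration of that contrast for a Bernoulli (logistic) likelihood with a Gaussian variational marginal q(f). It is an illustrative assumption for exposition, not the paper's implementation: the function names are invented, and the full objectives in the pseudo-Bayesian setting would additionally include a regularizer (e.g., a KL term to the prior), which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_data_term(mu, sigma, y, n_samples=10_000):
    """ELBO-style data term: E_q[log p(y|f)] with q(f) = N(mu, sigma^2).

    Uses log sigmoid(f) = -logaddexp(0, -f) for numerical stability.
    """
    f = rng.normal(mu, sigma, n_samples)
    log_lik = y * (-np.logaddexp(0, -f)) + (1 - y) * (-np.logaddexp(0, f))
    return log_lik.mean()

def direct_loss_data_term(mu, sigma, y, n_samples=10_000):
    """Loss-derived data term: log E_q[p(y|f)], the (negative) log loss
    of the predictive probability under q(f) = N(mu, sigma^2)."""
    f = rng.normal(mu, sigma, n_samples)
    p = np.where(y == 1, 1.0 / (1.0 + np.exp(-f)), 1.0 / (1.0 + np.exp(f)))
    return np.log(p.mean())

# Toy check of the Jensen gap between the two criteria:
mu, sigma, y = 0.5, 2.0, 1
print(elbo_data_term(mu, sigma, y))         # E_q[log p(y|f)]
print(direct_loss_data_term(mu, sigma, y))  # log E_q[p(y|f)]  >= the ELBO term
```

The gap between the two printed values is exactly the Jensen gap that separates the marginal-likelihood bound from the prediction loss; optimizing the second, loss-derived quantity is what makes loss guarantees on predictions with the pseudo-posterior tractable to analyze.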