Scalable and Robust Bayesian Inference via the Median Posterior


Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin, David Dunson;
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1656-1664, 2014.

Abstract

Many Bayesian learning methods for massive data benefit from working with small subsets of observations. In particular, significant progress has been made in scalable Bayesian learning via stochastic approximation. However, Bayesian learning methods in distributed computing environments are often problem- or distribution-specific and use ad hoc techniques. We propose a novel general approach to Bayesian inference that is scalable and robust to corruption in the data. Our technique is based on the idea of splitting the data into several non-overlapping subgroups, evaluating the posterior distribution given each independent subgroup, and then combining the results. The main novelty is the proposed aggregation step which is based on finding the geometric median of posterior distributions. We present both theoretical and numerical results illustrating the advantages of our approach.
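The split-and-aggregate scheme described above can be illustrated with a minimal sketch. The example below is a simplified, hypothetical illustration (not the paper's implementation): it uses a conjugate normal model so each subset posterior mean has a closed form, and combines the subset summaries with Weiszfeld's algorithm, a standard iterative method for the geometric median. The model parameters (prior variance `tau2`, noise variance `sigma2`) and the one-dimensional setup are assumptions for illustration; one corrupted subgroup shows the robustness of the median aggregation compared to naive averaging.

```python
import numpy as np

def weiszfeld(points, n_iter=100, eps=1e-8):
    """Geometric median of the rows of `points` via Weiszfeld's algorithm."""
    m = points.mean(axis=0)  # start from the centroid
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - m, axis=1), eps)
        w = 1.0 / d  # points closer to the current estimate get larger weight
        m_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

def subset_posterior_mean(x, tau2=100.0, sigma2=1.0):
    """Posterior mean for a normal mean with prior N(0, tau2), known noise sigma2."""
    return x.sum() / (len(x) + sigma2 / tau2)

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=1000)       # true mean is 2.0
groups = np.array_split(data, 10)            # non-overlapping subgroups
groups[0] = groups[0] + 50.0                 # corrupt one subgroup

# Summarize each subset posterior by its mean, then aggregate.
post_means = np.array([[subset_posterior_mean(g)] for g in groups])
med = weiszfeld(post_means)    # geometric-median aggregation: robust
naive = post_means.mean(axis=0)  # naive averaging: pulled by corruption
```

In this toy run the median-based aggregate stays near the true mean of 2.0, while the naive average of subset posterior means is dragged several units away by the single corrupted subgroup, mirroring the robustness property the abstract claims.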
