The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:533-540, 2015.
Abstract
Leveraging the coherent exploration of Hamiltonian flow, Hamiltonian Monte Carlo produces computationally efficient Monte Carlo estimators, even with respect to complex and high-dimensional target distributions. When confronted with data-intensive applications, however, the algorithm may be too expensive to implement, leaving us to consider the utility of approximations such as data subsampling. In this paper I demonstrate how data subsampling fundamentally compromises the scalability of Hamiltonian Monte Carlo.
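The "naive data subsampling" at issue can be sketched as replacing the exact gradient of the potential energy inside HMC's leapfrog integrator with a rescaled minibatch estimate. The sketch below is illustrative only and is not taken from the paper: the toy Gaussian-mean target, the batch size, and all function names are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: posterior for a 1-D Gaussian mean with known unit variance
# and a flat prior, so U(q) = -log posterior (up to a constant).
data = rng.normal(loc=2.0, scale=1.0, size=1000)

def grad_U_full(q):
    # Exact gradient of the potential over the full data set.
    return np.sum(q - data)

def grad_U_subsampled(q, batch_size=50):
    # Naive subsampling: estimate the gradient from a random minibatch,
    # rescaled to the full data size.  This noisy substitution is the
    # approximation whose consequences the paper analyzes.
    batch = rng.choice(data, size=batch_size, replace=False)
    return (len(data) / batch_size) * np.sum(q - batch)

def leapfrog(q, p, grad_U, step_size=1e-3, n_steps=20):
    # Standard leapfrog integration of Hamilton's equations, with the
    # gradient oracle passed in so exact and subsampled versions can
    # be compared directly.
    p = p - 0.5 * step_size * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + step_size * p
        p = p - step_size * grad_U(q)
    q = q + step_size * p
    p = p - 0.5 * step_size * grad_U(q)
    return q, p

# One trajectory from the same initial point, exact vs. subsampled gradient.
q0, p0 = 0.0, 1.0
q_exact, _ = leapfrog(q0, p0, grad_U_full)
q_noisy, _ = leapfrog(q0, p0, grad_U_subsampled)
print(q_exact, q_noisy)
```

Because the subsampled gradient is a noisy stand-in for the true one, the two trajectories drift apart, which gives a concrete picture of how subsampling noise perturbs the coherent Hamiltonian flow the abstract refers to.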