Stochastic Variance-Reduced Hamilton Monte Carlo Methods

Difan Zou, Pan Xu, Quanquan Gu
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:6028-6037, 2018.

Abstract

We propose a fast stochastic Hamilton Monte Carlo (HMC) method for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by recent advances in stochastic optimization. We show that, to achieve $\epsilon$ accuracy in 2-Wasserstein distance, our algorithm requires $\tilde O\big(n+\kappa^{2}d^{1/2}/\epsilon+\kappa^{4/3}d^{1/3}n^{2/3}/\epsilon^{2/3}\big)$ gradient complexity (i.e., number of component gradient evaluations), where $n$ is the number of component functions, $d$ is the dimension, and $\kappa$ is the condition number; this outperforms the state-of-the-art HMC and stochastic gradient HMC methods in a wide regime. We also extend our algorithm to sampling from smooth and general log-concave distributions, and prove the corresponding gradient complexity bound. Experiments on both synthetic and real data demonstrate the superior performance of our algorithm.
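
The variance reduction technique referenced in the abstract follows the SVRG pattern from stochastic optimization: compute a full gradient once per epoch at a snapshot iterate, then correct each mini-batch gradient by the snapshot's gradient on the same mini-batch. The sketch below illustrates this estimator driven through an Euler discretization of underdamped Langevin dynamics, assuming a sum-structured negative log-density $f(x)=\frac{1}{n}\sum_i f_i(x)$; the update rule, function names, and parameters (eta, gamma, u, batch) are illustrative placeholders, not the paper's exact algorithm or notation.

import numpy as np


def svrg_gradient(x, snapshot_x, full_grad, grad_i, idx):
    # SVRG-style estimator: mini-batch gradient at x, corrected by the
    # snapshot's mini-batch gradient and the cached full gradient.
    g = np.mean([grad_i(x, i) for i in idx], axis=0)
    g_snap = np.mean([grad_i(snapshot_x, i) for i in idx], axis=0)
    return g - g_snap + full_grad


def svr_hmc_sketch(grad_i, n, d, n_epochs=10, epoch_len=50,
                   eta=1e-2, gamma=2.0, u=1.0, batch=10, rng=None):
    # Illustrative variance-reduced stochastic-gradient HMC: an Euler step
    # of underdamped Langevin dynamics driven by the SVRG estimator.
    # eta (step size), gamma (friction), u (inverse mass) are hypothetical
    # tuning parameters.
    rng = np.random.default_rng() if rng is None else rng
    x, v = np.zeros(d), np.zeros(d)  # position (the sample) and velocity
    for _ in range(n_epochs):
        snapshot_x = x.copy()
        # One full gradient per epoch, reused by every inner iteration.
        full_grad = np.mean([grad_i(snapshot_x, i) for i in range(n)], axis=0)
        for _ in range(epoch_len):
            idx = rng.integers(n, size=batch)
            g = svrg_gradient(x, snapshot_x, full_grad, grad_i, idx)
            x = x + eta * v
            v = (v - eta * (gamma * v + u * g)
                 + np.sqrt(2.0 * gamma * u * eta) * rng.standard_normal(d))
    return x


# Usage: sample from N(mean(a_i), I), where f_i(x) = ||x - a_i||^2 / 2.
a = np.random.randn(100, 5)
sample = svr_hmc_sketch(lambda x, i: x - a[i], n=100, d=5)

Because the estimator's variance shrinks as the iterate approaches the snapshot, each inner iteration costs only a mini-batch of component gradients while the full gradient is amortized over the epoch, which is the mechanism behind the gradient complexity bound stated above.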

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-zou18a,
  title     = {Stochastic Variance-Reduced {H}amilton {M}onte {C}arlo Methods},
  author    = {Zou, Difan and Xu, Pan and Gu, Quanquan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {6028--6037},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/zou18a/zou18a.pdf},
  url       = {https://proceedings.mlr.press/v80/zou18a.html}
}
Endnote
%0 Conference Paper
%T Stochastic Variance-Reduced Hamilton Monte Carlo Methods
%A Difan Zou
%A Pan Xu
%A Quanquan Gu
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-zou18a
%I PMLR
%P 6028--6037
%U https://proceedings.mlr.press/v80/zou18a.html
%V 80
APA
Zou, D., Xu, P. & Gu, Q. (2018). Stochastic Variance-Reduced Hamilton Monte Carlo Methods. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:6028-6037. Available from https://proceedings.mlr.press/v80/zou18a.html.