An Adaptive Learning Rate for Stochastic Variational Inference

Rajesh Ranganath, Chong Wang, David Blei, Eric Xing
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(2):298-306, 2013.

Abstract

Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.
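The operational loop the abstract describes, iteratively subsampling the data, computing a noisy gradient, and updating parameters with a decreasing learning rate, can be sketched as follows. This is a toy illustration of the baseline hand-tuned Robbins-Monro schedule that the paper's adaptive rate replaces, not an implementation of the paper's method; the objective, data, and hyperparameter names (`tau0`, `kappa`) are all illustrative.

```python
import random

# Toy sketch of the stochastic-optimization loop from the abstract:
# subsample, compute a noisy gradient, update with a decreasing
# (Robbins-Monro) learning rate rho_t = (tau0 + t)^(-kappa).
# The paper's contribution is to replace this hand-tuned schedule
# with an adaptive rate; that method is NOT implemented here.

random.seed(0)
data = [2.0 + random.gauss(0.0, 1.0) for _ in range(10_000)]  # "large" data set

theta = 0.0               # parameter to estimate (here, the mean of the data)
tau0, kappa = 1.0, 0.7    # schedule hyperparameters; kappa in (0.5, 1]

for t in range(1, 2001):
    x = random.choice(data)          # subsample one data point
    noisy_grad = theta - x           # noisy gradient of 0.5 * (theta - x)^2
    rho = (tau0 + t) ** (-kappa)     # decreasing learning rate
    theta -= rho * noisy_grad        # stochastic update

print(theta)  # should be close to the true mean of the data
```

The sensitivity the abstract mentions shows up directly in `tau0` and `kappa`: too large a rate makes the iterates noisy, too small makes convergence slow, which is what motivates an adaptive rate computed from quantities the algorithm already tracks.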

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-ranganath13,
  title = {An Adaptive Learning Rate for Stochastic Variational Inference},
  author = {Rajesh Ranganath and Chong Wang and David Blei and Eric Xing},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages = {298--306},
  year = {2013},
  editor = {Sanjoy Dasgupta and David McAllester},
  volume = {28},
  number = {2},
  series = {Proceedings of Machine Learning Research},
  address = {Atlanta, Georgia, USA},
  month = {17--19 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v28/ranganath13.pdf},
  url = {http://proceedings.mlr.press/v28/ranganath13.html},
  abstract = {Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.}
}
Endnote
%0 Conference Paper
%T An Adaptive Learning Rate for Stochastic Variational Inference
%A Rajesh Ranganath
%A Chong Wang
%A David Blei
%A Eric Xing
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-ranganath13
%I PMLR
%J Proceedings of Machine Learning Research
%P 298--306
%U http://proceedings.mlr.press
%V 28
%N 2
%W PMLR
%X Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.
RIS
TY - CPAPER
TI - An Adaptive Learning Rate for Stochastic Variational Inference
AU - Rajesh Ranganath
AU - Chong Wang
AU - David Blei
AU - Eric Xing
BT - Proceedings of the 30th International Conference on Machine Learning
PY - 2013/02/13
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-ranganath13
PB - PMLR
SP - 298
DP - PMLR
EP - 306
L1 - http://proceedings.mlr.press/v28/ranganath13.pdf
UR - http://proceedings.mlr.press/v28/ranganath13.html
AB - Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.
ER -
APA
Ranganath, R., Wang, C., Blei, D. &amp; Xing, E. (2013). An Adaptive Learning Rate for Stochastic Variational Inference. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(2):298-306