Efficient Learning of Restricted Boltzmann Machines Using Covariance Estimates

Vidyadhar Upadhya, P S Sastry
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:836-851, 2019.

Abstract

Learning RBMs using standard algorithms such as CD(k) involves gradient descent on the negative log-likelihood. One of the terms in the gradient, which involves an expectation w.r.t. the model distribution, is intractable and is obtained through an MCMC estimate. In this work we show that the Hessian of the log-likelihood can be written in terms of covariances of hidden and visible units, and hence all elements of the Hessian can also be estimated from the same MCMC samples at a small extra computational cost. Since inverting the Hessian may be computationally expensive, we propose an algorithm that instead uses the inverse of a diagonal approximation of the Hessian. This essentially results in parameter-specific adaptive learning rates for the gradient descent process and improves the efficiency of learning RBMs compared to the standard methods. Specifically, we show that using the inverse of the diagonal approximation of the Hessian in the stochastic DC (difference of convex functions) programming approach results in very efficient learning of RBMs.
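To make the covariance identity concrete, the following is the textbook form of the result the abstract refers to, written for a binary RBM with weights w_ij and average log-likelihood L (a sketch reconstructed from the standard RBM derivation, not quoted from the paper):

\[
\frac{\partial^2 \mathcal{L}}{\partial w_{ij}\,\partial w_{kl}}
= \mathbb{E}_{v \sim \mathrm{data}}\!\left[\operatorname{Cov}_{p(h \mid v)}\bigl(v_i h_j,\, v_k h_l\bigr)\right]
- \operatorname{Cov}_{p(v,h)}\bigl(v_i h_j,\, v_k h_l\bigr).
\]

Because v_i h_j takes values in {0, 1}, the diagonal entries reduce to first moments that the negative-phase MCMC samples already provide:

\[
\frac{\partial^2 \mathcal{L}}{\partial w_{ij}^2}
= \mathbb{E}_{v \sim \mathrm{data}}\!\left[v_i\, \sigma_j(v)\bigl(1 - \sigma_j(v)\bigr)\right]
- s_{ij}\bigl(1 - s_{ij}\bigr),
\qquad \sigma_j(v) := p(h_j = 1 \mid v),\quad s_{ij} := \mathbb{E}_{p(v,h)}[v_i h_j].
\]

The sketch below illustrates how such a diagonal estimate can supply parameter-specific learning rates inside an ordinary CD(k) update. It is a minimal NumPy illustration of the idea, not the paper's stochastic DC programming algorithm; the function name cd_step_diag_hessian, the damping constant, and the |d| + damping scaling are our assumptions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd_step_diag_hessian(W, b, c, v_data, rng, k=1, eta=0.1, damping=0.1):
        """One CD(k) update of W (shape (m, n)) on a batch v_data (shape (N, m)),
        with per-parameter step sizes from a diagonal-Hessian estimate built
        from the same samples used for the gradient."""
        N = v_data.shape[0]

        # Positive phase: p(h|v) is exact for each data vector.
        ph0 = sigmoid(v_data @ W + c)              # (N, n), p(h_j = 1 | v)
        pos_mean = v_data.T @ ph0 / N              # estimate of E_data[v_i h_j]
        # Given v, Var(v_i h_j | v) = v_i * sigma_j(v) * (1 - sigma_j(v)).
        pos_var = v_data.T @ (ph0 * (1.0 - ph0)) / N

        # Negative phase: k steps of block Gibbs sampling started at the data.
        v = v_data
        for _ in range(k):
            h = (rng.random(ph0.shape) < sigmoid(v @ W + c)).astype(float)
            v = (rng.random(v_data.shape) < sigmoid(h @ W.T + b)).astype(float)
        phk = sigmoid(v @ W + c)
        neg_mean = v.T @ phk / N                   # estimate of E_model[v_i h_j]
        # v_i h_j is 0/1, so Var_model(v_i h_j) = s_ij * (1 - s_ij).
        neg_var = neg_mean * (1.0 - neg_mean)

        grad = pos_mean - neg_mean                 # log-likelihood gradient
        diag_hess = pos_var - neg_var              # diagonal Hessian estimate

        # Parameter-specific learning rates: damped inverse of the diagonal.
        W += eta * grad / (np.abs(diag_hess) + damping)
        return W

    # Usage on a toy batch of binary data:
    # rng = np.random.default_rng(0)
    # W = 0.01 * rng.standard_normal((784, 64))
    # W = cd_step_diag_hessian(W, np.zeros(784), np.zeros(64), batch, rng)

Note that the extra work over plain CD(k) is only the two variance lines, which reuse the conditional means already computed for the gradient; this is the "small extra computational cost" the abstract refers to.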

Cite this Paper


BibTeX
@InProceedings{pmlr-v101-upadhya19a,
  title     = {Efficient Learning of Restricted Boltzmann Machines Using Covariance Estimates},
  author    = {Upadhya, Vidyadhar and Sastry, P S},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  pages     = {836--851},
  year      = {2019},
  editor    = {Lee, Wee Sun and Suzuki, Taiji},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v101/upadhya19a/upadhya19a.pdf},
  url       = {https://proceedings.mlr.press/v101/upadhya19a.html},
  abstract  = {Learning RBMs using standard algorithms such as CD(k) involves gradient descent on the negative log-likelihood. One of the terms in the gradient, which involves an expectation w.r.t. the model distribution, is intractable and is obtained through an MCMC estimate. In this work we show that the Hessian of the log-likelihood can be written in terms of covariances of hidden and visible units, and hence all elements of the Hessian can also be estimated from the same MCMC samples at a small extra computational cost. Since inverting the Hessian may be computationally expensive, we propose an algorithm that instead uses the inverse of a diagonal approximation of the Hessian. This essentially results in parameter-specific adaptive learning rates for the gradient descent process and improves the efficiency of learning RBMs compared to the standard methods. Specifically, we show that using the inverse of the diagonal approximation of the Hessian in the stochastic DC (difference of convex functions) programming approach results in very efficient learning of RBMs.}
}
EndNote
%0 Conference Paper
%T Efficient Learning of Restricted Boltzmann Machines Using Covariance Estimates
%A Vidyadhar Upadhya
%A P S Sastry
%B Proceedings of The Eleventh Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Wee Sun Lee
%E Taiji Suzuki
%F pmlr-v101-upadhya19a
%I PMLR
%P 836--851
%U https://proceedings.mlr.press/v101/upadhya19a.html
%V 101
%X Learning RBMs using standard algorithms such as CD(k) involves gradient descent on the negative log-likelihood. One of the terms in the gradient, which involves an expectation w.r.t. the model distribution, is intractable and is obtained through an MCMC estimate. In this work we show that the Hessian of the log-likelihood can be written in terms of covariances of hidden and visible units, and hence all elements of the Hessian can also be estimated from the same MCMC samples at a small extra computational cost. Since inverting the Hessian may be computationally expensive, we propose an algorithm that instead uses the inverse of a diagonal approximation of the Hessian. This essentially results in parameter-specific adaptive learning rates for the gradient descent process and improves the efficiency of learning RBMs compared to the standard methods. Specifically, we show that using the inverse of the diagonal approximation of the Hessian in the stochastic DC (difference of convex functions) programming approach results in very efficient learning of RBMs.
APA
Upadhya, V. & Sastry, P. S. (2019). Efficient Learning of Restricted Boltzmann Machines Using Covariance Estimates. Proceedings of The Eleventh Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 101:836-851. Available from https://proceedings.mlr.press/v101/upadhya19a.html.
