Learning RBM with a DC programming Approach
Proceedings of the Ninth Asian Conference on Machine Learning, PMLR 77:498-513, 2017.
Abstract
By exploiting the property that the RBM log-likelihood function is a difference of convex functions, we formulate a stochastic variant of difference of convex functions (DC) programming to minimize the negative log-likelihood. Interestingly, the traditional contrastive divergence algorithm is a special case of this formulation, and the hyperparameters of the two algorithms can be chosen so that the amount of computation per mini-batch is identical. We show that, for a given computational budget, the proposed algorithm almost always reaches a higher log-likelihood more rapidly than the standard contrastive divergence algorithm. Further, we modify this algorithm to use centered gradients and show that it is more efficient and effective than the standard centered gradient algorithm on benchmark datasets.
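For context, the baseline that the paper builds on is standard contrastive divergence (CD-k) for a binary RBM. The sketch below is a minimal NumPy implementation of one CD-k mini-batch update, not the paper's DC programming variant; all names (`cd_k_step`, `W`, `b`, `c`, the learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_step(v0, W, b, c, k=1, lr=0.05):
    """One CD-k update on a mini-batch v0 of binary visibles.

    v0: (batch, n_visible); W: (n_visible, n_hidden);
    b: visible bias; c: hidden bias. Updates W, b, c in place.
    """
    # Positive phase: hidden unit probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    # Negative phase: run a Gibbs chain of length k from the data
    # (the contrastive divergence approximation to the model term).
    vk = v0
    for _ in range(k):
        hk = (rng.random(ph0.shape) < sigmoid(vk @ W + c)).astype(float)
        vk = (rng.random(v0.shape) < sigmoid(hk @ W.T + b)).astype(float)
    phk = sigmoid(vk @ W + c)
    n = v0.shape[0]
    # Approximate gradient of the negative log-likelihood:
    # data statistics minus chain-sample statistics.
    W += lr * (v0.T @ ph0 - vk.T @ phk) / n
    b += lr * (v0 - vk).mean(axis=0)
    c += lr * (ph0 - phk).mean(axis=0)
    return W, b, c

# Tiny usage example on random binary data.
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)
c = np.zeros(n_hid)
batch = (rng.random((16, n_vis)) < 0.5).astype(float)
for _ in range(10):
    W, b, c = cd_k_step(batch, W, b, c, k=1)
```

In the paper's framing, the inner Gibbs loop length and the number of stochastic updates per mini-batch are the hyperparameters that can be matched between CD and the DC variant so that both consume the same computation.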