Learning Parts-based Representations with Nonnegative Restricted Boltzmann Machine
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:133-148, 2013.
Abstract
The success of any machine learning system depends critically on effective representations of data. In many cases, especially those in vision, it is desirable that a representation scheme uncovers the parts-based, additive nature of the data. Of current representation learning schemes, restricted Boltzmann machines (RBMs) have proved to be highly effective in unsupervised settings. However, when it comes to parts-based discovery, RBMs do not usually produce satisfactory results. We enhance this capacity of RBMs by introducing nonnegativity into the model weights, resulting in a variant called the nonnegative restricted Boltzmann machine (NRBM). The NRBM not only produces a controllable decomposition of data into interpretable parts but also offers a way to estimate the intrinsic nonlinear dimensionality of data. We demonstrate the capacity of our model on well-known datasets of handwritten digits, faces and documents. The decomposition quality on images is comparable with, or better than, that produced by nonnegative matrix factorisation (NMF), and the thematic features uncovered from text are qualitatively interpretable in a manner similar to those of latent Dirichlet allocation (LDA). However, the learnt features, when used for classification, are more discriminative than those discovered by both NMF and LDA, and comparable with those learnt by the standard RBM.
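To make the central idea concrete, here is a minimal sketch (not the authors' reference code) of an RBM trained with one-step contrastive divergence in which the weight matrix is kept nonnegative by clipping after each update. The class name NRBM, the method cd1_step, and the clipping rule are illustrative assumptions; the paper enforces the constraint through its learning objective, and projection is used here only as a simple stand-in to show the effect of restricting weights to the nonnegative orthant.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NRBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        # Start from small nonnegative weights so the constraint holds initially.
        self.W = rng.uniform(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases (unconstrained)
        self.c = np.zeros(n_hidden)    # hidden biases (unconstrained)
        self.lr = lr

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data.
        h0 = sigmoid(v0 @ self.W + self.c)
        # Negative phase: one step of Gibbs sampling.
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b)
        h1 = sigmoid(v1 @ self.W + self.c)
        # CD-1 gradient approximation and parameter updates.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        # Nonnegativity: clip any weights the update pushed below zero.
        np.clip(self.W, 0.0, None, out=self.W)

# Usage: fit on binary vectors (here random stand-ins for 28x28 images).
data = (rng.random((64, 784)) < 0.1).astype(float)
rbm = NRBM(n_visible=784, n_hidden=50)
for epoch in range(10):
    rbm.cd1_step(data)
assert (rbm.W >= 0).all()  # weights stay in the nonnegative orthant

Because every hidden unit can only add (never subtract) visible activity, each column of W tends to act as an additive part, which is the parts-based behaviour the abstract describes.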