Batch Normalized Deep Boltzmann Machines

Hung Vu, Tu Dinh Nguyen, Trung Le, Wei Luo, Dinh Phung
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:359-374, 2018.

Abstract

Training Deep Boltzmann Machines (DBMs) is a challenging task in deep generative model studies. Careless training usually leads to divergence or a useless model. We discover that this phenomenon is due to the change of DBM layers’ input signals during model parameter updates, similar to other deterministic deep networks such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs). The change of layers’ input distributions not only complicates the learning process but also produces redundant neurons that simply imitate the behavior of others. Although this phenomenon can be addressed with batch normalization in deep learning, integrating the technique into the probabilistic network of a DBM is challenging because it must satisfy two conditions on the energy function and the conditional probabilities. In this paper, we introduce Batch Normalized Deep Boltzmann Machines (BNDBMs), which meet both conditions and successfully combine batch normalization and DBMs in the same framework. Unlike in CNNs, however, the probabilistic nature of DBMs means that training with batch normalization differs in several ways: i) the shift parameters $\beta$ are fixed while the scale parameters $\gamma$ are learned; ii) the first hidden layer is not normalized; and iii) multiple pairs of population means and variances are maintained per neuron, rather than the single pair used in CNNs. We observe that our proposed BNDBMs stabilize the input signals of network layers, facilitate the training process, and improve model quality. More interestingly, BNDBMs can be trained successfully without pretraining, which is usually a mandatory step in most existing DBMs. Experimental results on the MNIST, Fashion-MNIST and Caltech 101 Silhouettes datasets show that our BNDBMs outperform DBMs and centered DBMs in terms of feature representation and classification accuracy (3.98% and 5.84% average improvement with and without pretraining, respectively).
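To make these differences concrete, below is a minimal Python/NumPy sketch (not the authors' implementation) of how a batch-normalized pre-activation could enter the conditional probability of a higher hidden layer in a DBM: the shift $\beta$ is held fixed, the scale $\gamma$ is learned, and normalization is applied only above the first hidden layer. The class and variable names, and the use of a single running pair of population statistics, are illustrative assumptions rather than details taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BNHiddenLayer:
    """One hidden layer h^(l), l >= 2, whose input signal is batch-normalized."""

    def __init__(self, n_in, n_out, eps=1e-5, momentum=0.1):
        self.W = 0.01 * np.random.randn(n_in, n_out)
        self.gamma = np.ones(n_out)   # scale: learned, cf. point (i)
        self.beta = np.zeros(n_out)   # shift: kept fixed, cf. point (i)
        self.eps, self.momentum = eps, momentum
        # Population statistics. The paper maintains multiple pairs per neuron
        # (point (iii)); a single running pair is tracked here for brevity.
        self.run_mean = np.zeros(n_out)
        self.run_var = np.ones(n_out)

    def cond_prob(self, h_below, training=True):
        """p(h^(l) = 1 | h^(l-1)) with a batch-normalized pre-activation."""
        z = h_below @ self.W                      # raw input signal to the layer
        if training:
            mu, var = z.mean(axis=0), z.var(axis=0)
            self.run_mean = (1 - self.momentum) * self.run_mean + self.momentum * mu
            self.run_var = (1 - self.momentum) * self.run_var + self.momentum * var
        else:
            mu, var = self.run_mean, self.run_var
        z_hat = (z - mu) / np.sqrt(var + self.eps)
        return sigmoid(self.gamma * z_hat + self.beta)

# Usage: activation probabilities of a second hidden layer given the first.
rng = np.random.default_rng(0)
h1 = rng.random((64, 500))                        # batch of first-layer activations
layer2 = BNHiddenLayer(n_in=500, n_out=500)
q_h2 = layer2.cond_prob(h1, training=True)        # probabilities in (0, 1)

In a complete DBM mean-field or Gibbs update, a hidden layer also receives top-down input from the layer above it; that term is omitted here to keep the sketch short.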

Cite this Paper


BibTeX
@InProceedings{pmlr-v95-vu18a,
  title     = {Batch Normalized Deep Boltzmann Machines},
  author    = {Vu, Hung and Nguyen, {Tu Dinh} and Le, Trung and Luo, Wei and Phung, Dinh},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  pages     = {359--374},
  year      = {2018},
  editor    = {Zhu, Jun and Takeuchi, Ichiro},
  volume    = {95},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--16 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v95/vu18a/vu18a.pdf},
  url       = {https://proceedings.mlr.press/v95/vu18a.html}
}
Endnote
%0 Conference Paper
%T Batch Normalized Deep Boltzmann Machines
%A Hung Vu
%A Tu Dinh Nguyen
%A Trung Le
%A Wei Luo
%A Dinh Phung
%B Proceedings of The 10th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jun Zhu
%E Ichiro Takeuchi
%F pmlr-v95-vu18a
%I PMLR
%P 359--374
%U https://proceedings.mlr.press/v95/vu18a.html
%V 95
APA
Vu, H., Nguyen, T.D., Le, T., Luo, W. & Phung, D. (2018). Batch Normalized Deep Boltzmann Machines. Proceedings of The 10th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 95:359-374. Available from https://proceedings.mlr.press/v95/vu18a.html.