Deep Learning using Robust Interdependent Codes

Hugo Larochelle, Dumitru Erhan, Pascal Vincent
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:312-319, 2009.

Abstract

We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder of (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.
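The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration rather than the paper's exact formulation: a denoising-autoencoder encoding pass in which hidden units additionally receive lateral input from other hidden units through a strictly lower-triangular weight matrix, so that the interactions are asymmetric (unit k is influenced by units computed before it, but not vice versa). The names (`encode_with_lateral`, the lateral matrix `V`, the masking-noise level) and the sequential update order are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def corrupt(x, noise_prob, rng):
    # Denoising-autoencoder-style masking noise: zero out a random
    # subset of the input components.
    mask = rng.random(x.shape) >= noise_prob
    return x * mask

def encode_with_lateral(x_tilde, W, b, V):
    """Encode a corrupted input into a hidden code h.

    W, b: the usual autoencoder encoder weights and biases.
    V:    hypothetical asymmetric (strictly lower-triangular) lateral
          weight matrix; unit k receives inhibitory/excitatory input
          only from units j < k that were computed earlier.
    """
    pre = W @ x_tilde + b           # feed-forward drive from the input
    h = np.zeros_like(pre)
    for k in range(len(pre)):       # single sequential pass: asymmetry comes
        lateral = V[k, :k] @ h[:k]  # from conditioning only on earlier units
        h[k] = sigmoid(pre[k] + lateral)
    return h

def decode(h, W, c):
    # Tied-weight decoder reconstructing the uncorrupted input.
    return sigmoid(W.T @ h + c)

# Toy usage on a random binary "image".
d, n_hidden = 20, 10
W = 0.1 * rng.standard_normal((n_hidden, d))
b = np.zeros(n_hidden)
c = np.zeros(d)
V = np.tril(0.1 * rng.standard_normal((n_hidden, n_hidden)), k=-1)

x = (rng.random(d) > 0.5).astype(float)
x_tilde = corrupt(x, noise_prob=0.25, rng=rng)
h = encode_with_lateral(x_tilde, W, b, V)
x_hat = decode(h, W, c)
cross_entropy = -np.sum(x * np.log(x_hat + 1e-9) + (1 - x) * np.log(1 - x_hat + 1e-9))
print(f"reconstruction cross-entropy: {cross_entropy:.3f}")
```

In an actual layer-wise deep-learning setup, the parameters W, b, c, V would be trained to minimize this reconstruction loss, and the learned code h would serve as the input to the next layer; the sketch above only shows the forward computation.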

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-larochelle09a,
  title = {Deep Learning using Robust Interdependent Codes},
  author = {Larochelle, Hugo and Erhan, Dumitru and Vincent, Pascal},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages = {312--319},
  year = {2009},
  editor = {van Dyk, David and Welling, Max},
  volume = {5},
  series = {Proceedings of Machine Learning Research},
  address = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month = {16--18 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v5/larochelle09a/larochelle09a.pdf},
  url = {https://proceedings.mlr.press/v5/larochelle09a.html},
  abstract = {We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder of (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.}
}
Endnote
%0 Conference Paper
%T Deep Learning using Robust Interdependent Codes
%A Hugo Larochelle
%A Dumitru Erhan
%A Pascal Vincent
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-larochelle09a
%I PMLR
%P 312--319
%U https://proceedings.mlr.press/v5/larochelle09a.html
%V 5
%X We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder of (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.
RIS
TY - CPAPER
TI - Deep Learning using Robust Interdependent Codes
AU - Hugo Larochelle
AU - Dumitru Erhan
AU - Pascal Vincent
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-larochelle09a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 5
SP - 312
EP - 319
L1 - http://proceedings.mlr.press/v5/larochelle09a/larochelle09a.pdf
UR - https://proceedings.mlr.press/v5/larochelle09a.html
AB - We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder of (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.
ER -
APA
Larochelle, H., Erhan, D. & Vincent, P. (2009). Deep Learning using Robust Interdependent Codes. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:312-319. Available from https://proceedings.mlr.press/v5/larochelle09a.html.