Autoencoders, Unsupervised Learning, and Deep Architectures

Pierre Baldi
Proceedings of ICML Workshop on Unsupervised and Transfer Learning, PMLR 27:37-49, 2012.

Abstract

Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder. Learning in the Boolean autoencoder is equivalent to a clustering problem that can be solved in polynomial time when the number of clusters is small and becomes NP-complete when the number of clusters is large. The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.

Cite this Paper


BibTeX
@InProceedings{pmlr-v27-baldi12a,
  title     = {Autoencoders, Unsupervised Learning, and Deep Architectures},
  author    = {Baldi, Pierre},
  booktitle = {Proceedings of ICML Workshop on Unsupervised and Transfer Learning},
  pages     = {37--49},
  year      = {2012},
  editor    = {Guyon, Isabelle and Dror, Gideon and Lemaire, Vincent and Taylor, Graham and Silver, Daniel},
  volume    = {27},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bellevue, Washington, USA},
  month     = {02 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v27/baldi12a/baldi12a.pdf},
  url       = {https://proceedings.mlr.press/v27/baldi12a.html},
  abstract  = {Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder. Learning in the Boolean autoencoder is equivalent to a clustering problem that can be solved in polynomial time when the number of clusters is small and becomes NP-complete when the number of clusters is large. The framework sheds light on the different kinds of autoencoders, their learning complexity, their horizontal and vertical composability in deep architectures, their critical points, and their fundamental connections to clustering, Hebbian learning, and information theory.}
}
