Loss Landscapes of Regularized Linear Autoencoders

Daniel Kunin, Jonathan Bloom, Aleksandrina Goeva, Cotton Seed
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3560-3569, 2019.

Abstract

Autoencoders are deep learning models for representation learning. When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that L2-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. We illustrate these results empirically and consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of learning.
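
To make the main result concrete, here is a minimal numerical sketch (not the authors' code; the dimensions, regularization weight lam, step size, and step count are illustrative choices). It runs gradient descent on the L2-regularized reconstruction loss L(W1, W2) = ||X - W2 W1 X||_F^2 + lam (||W1||_F^2 + ||W2||_F^2), with the reconstruction term averaged over samples for numerical convenience (which only rescales lam), and then checks the two claims above: the encoder converges to the transpose of the decoder, and the left singular vectors of the decoder align with the top principal directions of the data.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 10, 500, 3            # data dimension, number of samples, latent dimension
lam, lr, steps = 0.1, 1e-3, 20000

# Synthetic centered data whose rows have decaying variance, so the
# top principal directions are well separated.
X = rng.normal(size=(n, m)) * np.linspace(5.0, 0.5, n)[:, None]
X -= X.mean(axis=1, keepdims=True)

W1 = 0.1 * rng.normal(size=(k, n))   # encoder
W2 = 0.1 * rng.normal(size=(n, k))   # decoder

for _ in range(steps):
    R = W2 @ (W1 @ X) - X                        # reconstruction residual
    gW1 = 2.0 * (W2.T @ R @ X.T) / m + 2.0 * lam * W1
    gW2 = 2.0 * (R @ X.T @ W1.T) / m + 2.0 * lam * W2
    W1 -= lr * gW1
    W2 -= lr * gW2

# Symmetry at the critical point: the encoder should approximate
# the transpose of the decoder.
print("||W1 - W2.T||_F =", np.linalg.norm(W1 - W2.T))

# Left singular vectors of the decoder vs. top principal directions of X.
U_dec, _, _ = np.linalg.svd(W2, full_matrices=False)
U_pca, _, _ = np.linalg.svd(X, full_matrices=False)
print(np.round(np.abs(U_dec.T @ U_pca[:, :k]), 2))  # ~ identity up to sign/order

Up to the sign and ordering of columns, the printed matrix of absolute inner products should be close to the identity. Consistent with the paper's connection to probabilistic PCA, making lam large relative to the squared singular values of the data should shrink or prune the corresponding directions rather than recover them.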

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-kunin19a,
  title     = {Loss Landscapes of Regularized Linear Autoencoders},
  author    = {Kunin, Daniel and Bloom, Jonathan and Goeva, Aleksandrina and Seed, Cotton},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3560--3569},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/kunin19a/kunin19a.pdf},
  url       = {https://proceedings.mlr.press/v97/kunin19a.html},
  abstract  = {Autoencoders are deep learning models for representation learning. When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that $L_2$-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. We illustrate these results empirically and consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of learning.}
}
Endnote
%0 Conference Paper
%T Loss Landscapes of Regularized Linear Autoencoders
%A Daniel Kunin
%A Jonathan Bloom
%A Aleksandrina Goeva
%A Cotton Seed
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-kunin19a
%I PMLR
%P 3560--3569
%U https://proceedings.mlr.press/v97/kunin19a.html
%V 97
%X Autoencoders are deep learning models for representation learning. When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that $L_2$-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. We illustrate these results empirically and consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of learning.
APA
Kunin, D., Bloom, J., Goeva, A., & Seed, C. (2019). Loss Landscapes of Regularized Linear Autoencoders. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3560-3569. Available from https://proceedings.mlr.press/v97/kunin19a.html.
