Eliminating the Invariance on the Loss Landscape of Linear Autoencoders

Reza Oftadeh, Jiayi Shen, Zhangyang Wang, Dylan Shell
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7405-7413, 2020.

Abstract

This paper proposes a new loss function for linear autoencoders (LAEs) and analytically identifies the structure of the associated loss surface. Optimizing the conventional Mean Square Error (MSE) loss yields a decoder matrix that spans the principal subspace of the sample covariance of the data but, owing to an invariance that cancels out in the global map, fails to identify the exact eigenvectors. We show here that our proposed loss function eliminates this issue, so the decoder converges to the exact ordered, unnormalized eigenvectors of the sample covariance matrix. We characterize the full structure of the new loss landscape by establishing an analytical expression for the set of all critical points, showing that it is a subset of the critical points of MSE, and that all local minima remain global. In particular, the invariant global minima under MSE become saddle points under the new loss. Additionally, the computational complexity of the loss and its gradients matches that of MSE, so the new loss is not only of theoretical importance but also of practical value, e.g., for low-rank approximation.
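To make the invariance concrete: for an LAE with encoder B and decoder A, the MSE loss ||X - ABX||_F^2 depends only on the product AB, so for any invertible p-by-p matrix C the pair (AC, C^{-1}B) attains exactly the same loss while its decoder columns are arbitrary mixtures of the principal directions. The NumPy sketch below demonstrates this symmetry. The nested-truncation loss at the end is an illustrative stand-in for an ordering-inducing loss, not necessarily the paper's exact formulation; it shows how summing reconstruction errors over truncated latent codes breaks the invariance, after which only much smaller symmetries (such as per-coordinate sign flips) survive.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 5, 2, 1000                       # data dim, latent dim, #samples
scales = np.array([3.0, 2.0, 1.0, 0.5, 0.1])
X = scales[:, None] * rng.standard_normal((n, m))

# Eigenvectors of the sample covariance, sorted by decreasing eigenvalue.
S = X @ X.T / m
w, U = np.linalg.eigh(S)                   # eigh returns ascending order
U = U[:, ::-1]

def mse(A, B):
    """Conventional LAE reconstruction loss ||X - A B X||_F^2 / m."""
    R = X - A @ (B @ X)
    return np.sum(R * R) / m

# The PCA solution is one MSE-optimal LAE ...
A, B = U[:, :p], U[:, :p].T

# ... but so is (A C, C^{-1} B) for any invertible C: the global map A B
# is unchanged, while the decoder columns stop being eigenvectors.
C = rng.standard_normal((p, p))            # almost surely invertible
A2, B2 = A @ C, np.linalg.inv(C) @ B
print(np.isclose(mse(A, B), mse(A2, B2)))  # True  -> invariance under MSE
print(np.allclose(A2, A))                  # False -> eigenvectors lost

# Illustrative ordering-inducing loss (an assumption, not taken verbatim
# from the paper): sum the reconstruction error over nested truncations
# of the latent code, keeping only the first i latent units per term.
def nested_loss(A, B):
    total = 0.0
    for i in range(1, p + 1):
        Ii = np.zeros((p, p))
        Ii[:i, :i] = np.eye(i)             # zero out latent units > i
        R = X - A @ Ii @ (B @ X)
        total += np.sum(R * R) / m
    return total

# The truncation terms are not invariant under a generic invertible C,
# so the MSE symmetry above is broken.
print(np.isclose(nested_loss(A, B), nested_loss(A2, B2)))  # False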

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-oftadeh20a,
  title     = {Eliminating the Invariance on the Loss Landscape of Linear Autoencoders},
  author    = {Oftadeh, Reza and Shen, Jiayi and Wang, Zhangyang and Shell, Dylan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7405--7413},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/oftadeh20a/oftadeh20a.pdf},
  url       = {https://proceedings.mlr.press/v119/oftadeh20a.html}
}
Endnote
%0 Conference Paper
%T Eliminating the Invariance on the Loss Landscape of Linear Autoencoders
%A Reza Oftadeh
%A Jiayi Shen
%A Zhangyang Wang
%A Dylan Shell
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-oftadeh20a
%I PMLR
%P 7405--7413
%U https://proceedings.mlr.press/v119/oftadeh20a.html
%V 119
APA
Oftadeh, R., Shen, J., Wang, Z. & Shell, D. (2020). Eliminating the Invariance on the Loss Landscape of Linear Autoencoders. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7405-7413. Available from https://proceedings.mlr.press/v119/oftadeh20a.html.
