Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods

Aleksandr Shevchenko, Kevin Kögler, Hamed Hassani, Marco Mondelli
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:31151-31209, 2023.

Abstract

Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.
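For concreteness, the setting described above can be sketched in a few lines of NumPy: a two-layer autoencoder with a sign activation applied to Gaussian inputs, with the latent dimension scaling linearly with the input dimension. The particular parameterization x ↦ B sign(Ax), the Gaussian weight initialization, the chosen dimensions, and the compression rate are illustrative assumptions and not the exact architecture or training procedure analyzed in the paper; the snippet merely estimates the per-coordinate reconstruction error of such a model by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Proportional regime: latent size n scales linearly with the input dimension d.
d = 512                      # input dimension (illustrative)
rate = 0.5                   # compression rate n / d (illustrative)
n = int(rate * d)            # size of the latent representation

# Two-layer autoencoder x -> B @ sign(A @ x); A is the encoder, B the decoder.
# Gaussian initialization is an assumption made here for illustration only.
A = rng.normal(size=(n, d)) / np.sqrt(d)
B = rng.normal(size=(d, n)) / np.sqrt(n)

def reconstruct(X):
    """Encode with a sign non-linearity, then decode linearly."""
    return (B @ np.sign(A @ X.T)).T

# Monte Carlo estimate of the population risk (per-coordinate MSE) on Gaussian data.
X = rng.normal(size=(10_000, d))
mse = np.mean((X - reconstruct(X)) ** 2)
print(f"compression rate {rate:.2f}, per-coordinate MSE ~ {mse:.3f}")
```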

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-shevchenko23a,
  title     = {Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods},
  author    = {Shevchenko, Aleksandr and K\"{o}gler, Kevin and Hassani, Hamed and Mondelli, Marco},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31151--31209},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/shevchenko23a/shevchenko23a.pdf},
  url       = {https://proceedings.mlr.press/v202/shevchenko23a.html},
  abstract  = {Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.}
}
APA
Shevchenko, A., Kögler, K., Hassani, H. & Mondelli, M. (2023). Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:31151-31209. Available from https://proceedings.mlr.press/v202/shevchenko23a.html.
