On the Dynamics of Gradient Descent for Autoencoders
Proceedings of Machine Learning Research, PMLR 89:2858-2867, 2019.
Abstract
We provide a series of results for unsupervised learning with autoencoders. Specifically, we study shallow two-layer autoencoder architectures with shared weights. We focus on three generative models for data that are common in statistical machine learning: (i) the mixture-of-Gaussians model, (ii) the sparse coding model, and (iii) the sparsity model with non-negative coefficients. For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model. To our knowledge, this is the first result that rigorously studies the dynamics of gradient descent for weight-sharing autoencoders. Our analysis can be viewed as theoretical evidence that shallow autoencoder modules indeed can be used as feature learning mechanisms for a variety of data models, and may shed light on how to train larger stacked architectures with autoencoders as basic building blocks.
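To make the setting concrete, below is a minimal sketch (not the authors' code) of a weight-sharing two-layer autoencoder trained by plain gradient descent on data drawn from a sparsity model with non-negative coefficients, the third of the generative models above. The dimensions, sparsity level, noise scale, ReLU activation, and learning rate are illustrative assumptions, not values from the paper.

```python
import torch

# Illustrative dimensions (assumed): n = data dimension, m = hidden dimension.
n, m, num_samples = 64, 32, 1024

# Synthetic data from a non-negative sparse model: x = W* z + noise,
# where z has non-negative entries and is sparse (~10% support, assumed).
torch.manual_seed(0)
W_star = torch.randn(n, m) / n**0.5
z = torch.rand(num_samples, m) * (torch.rand(num_samples, m) < 0.1).float()
X = z @ W_star.T + 0.01 * torch.randn(num_samples, n)

# Weight-sharing (tied) autoencoder: one matrix W is used by both the
# encoder and the decoder, so gradient descent updates a single parameter set.
W = torch.randn(n, m, requires_grad=True)
b = torch.zeros(m, requires_grad=True)

opt = torch.optim.SGD([W, b], lr=0.1)
for step in range(500):
    h = torch.relu(X @ W + b)   # encoder: h = ReLU(X W + b)
    x_hat = h @ W.T             # decoder reuses the same W: x_hat = h W^T
    loss = ((x_hat - X) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step}: reconstruction loss {loss.item():.6f}")
```

The paper's claim, in these terms, is that for data generated as above and suitable initialization and hyperparameters, the learned W converges to (a column permutation and scaling of) the ground-truth dictionary W_star; the sketch only illustrates the training dynamics being analyzed.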