Learning One-Hidden-Layer ReLU Networks via Gradient Descent
Proceedings of Machine Learning Research, PMLR 89:1524-1534, 2019.
Abstract
We study the problem of learning one-hidden-layer neural networks with Rectified Linear Unit (ReLU) activation, where the inputs are sampled from the standard Gaussian distribution and the outputs are generated by a noisy teacher network. We analyze the performance of gradient descent for training such networks via empirical risk minimization, and provide algorithm-dependent guarantees. In particular, we prove that tensor initialization followed by gradient descent converges to the ground-truth parameters at a linear rate, up to some statistical error. To the best of our knowledge, this is the first work characterizing recovery guarantees for practical learning of one-hidden-layer ReLU networks with multiple neurons. Numerical experiments verify our theoretical findings.
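The teacher-student setup described above can be sketched in a few lines of NumPy. This is only an illustrative simplification: it fixes the second-layer weights to 1, uses a plain random initialization instead of the paper's tensor initialization, and all dimensions, step sizes, and noise levels are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 5, 5000  # input dim, hidden neurons, samples (arbitrary)

# Teacher network: y = sum_j relu(w_j^T x) + noise, Gaussian inputs.
W_true = rng.normal(size=(k, d))
X = rng.normal(size=(n, d))  # standard Gaussian inputs
y = np.maximum(X @ W_true.T, 0).sum(axis=1) + 0.01 * rng.normal(size=n)

def risk(W):
    """Empirical squared risk of the student network."""
    return np.mean((np.maximum(X @ W.T, 0).sum(axis=1) - y) ** 2)

# Student: same architecture, trained by gradient descent on empirical risk.
# (The paper initializes via a tensor method; here we use a random start.)
W = rng.normal(size=(k, d))
loss0 = risk(W)
lr = 0.05
for _ in range(500):
    H = X @ W.T                                   # (n, k) pre-activations
    resid = np.maximum(H, 0).sum(axis=1) - y      # (n,) residuals
    grad = ((H > 0) * resid[:, None]).T @ X / n   # (k, d) risk gradient
    W -= lr * grad
loss_final = risk(W)
```

With a matched architecture and enough samples, the empirical risk drops sharply during training; the paper's analysis explains when such descent provably recovers the teacher's parameters.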