Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1470-1499, 2019.
Abstract
We give a polynomial-time algorithm for learning neural networks with one layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU). The algorithm succeeds with respect to any distribution on the unit ball in n dimensions (hidden weight vectors in the first layer have unit norm). This is the first efficient algorithm for learning a general class of neural networks with more than one nonlinear layer that makes no restrictions on the VC-dimension of the network. Algorithms for learning relaxations of our model (e.g., allowing larger weight vectors in the first layer) would lead to breakthroughs on notoriously hard problems in Boolean function learning. Thus, our results are "best possible" with respect to current techniques. Our algorithm, Alphatron, is an iterative update rule that combines isotonic regression with kernel methods. We use this algorithm to give a simple reduction for translating PAC learning algorithms to the more general, real-valued setting of probabilistic concepts, a model that (unlike PAC learning) requires non-i.i.d. noise-tolerance. This substantially improves many longstanding results for PAC learning Boolean functions.
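The core of Alphatron, as described in the abstract, is an iterative additive update on kernel coefficients, with the known monotone activation applied on top. The sketch below is a simplified, hedged rendering of that idea (not the paper's full algorithm): the function names, the choice of kernel, and the stopping rule (a fixed iteration count rather than the paper's validation-based selection) are illustrative assumptions.

```python
import numpy as np

def alphatron(X, y, u, kernel, T=100, lam=1.0):
    """Sketch of an Alphatron-style update rule (simplified; illustrative only).

    X      : (m, n) training inputs, assumed to lie in the unit ball
    y      : (m,) real-valued labels in [0, 1]
    u      : known monotone, Lipschitz activation (e.g., sigmoid)
    kernel : kernel function K(x, x') standing in for the hidden layer
    Returns the learned coefficient vector alpha; the hypothesis is
    h(x) = u(sum_j alpha_j * K(X_j, x)).
    """
    m = X.shape[0]
    # Gram matrix over the training set
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    alpha = np.zeros(m)
    for _ in range(T):
        h = u(K @ alpha)                      # current predictions
        alpha = alpha + (lam / m) * (y - h)   # additive correction per example
    return alpha
```

A toy usage, with a sigmoid outer activation and a simple shifted inner-product kernel (both choices are assumptions for illustration):

```python
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
k = lambda p, q: float(np.dot(p, q)) + 1.0
X = np.array([[0.5, 0.0], [0.0, 0.5], [0.3, 0.3]])
y = np.array([0.5, 0.2, 0.4])
alpha = alphatron(X, y, sigmoid, k, T=50, lam=0.5)
```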