A case where a spindly two-layer linear network decisively outperforms any neural network with a fully connected input layer

Manfred K. Warmuth, Wojciech Kotłowski, Ehsan Amid
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:1214-1236, 2021.

Abstract

It was conjectured that no neural network of any structure, with arbitrary differentiable transfer functions at the nodes, can learn the following problem sample efficiently when trained with gradient descent: the instances are the rows of a $d$-dimensional Hadamard matrix and the target is one of the features, i.e., very sparse. We essentially prove this conjecture: we show that after receiving a random training set of size $k < d$, the expected squared loss is still $1-\frac{k}{d-1}$. The only requirements are that the input layer is fully connected and that the initial weight vectors of the input nodes are chosen from a rotation-invariant distribution. Surprisingly, the same type of problem can be solved drastically more efficiently by a simple two-layer linear neural network in which the $d$ inputs are connected to the output node by chains of length 2 (now the input layer has only one edge per input). When such a network is trained with gradient descent, it has been shown that its expected squared loss is $\frac{\log d}{k}$. Our lower bounds essentially show that a sparse input layer is needed to learn sparse targets sample efficiently with gradient descent.
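The learning problem in the abstract is easy to instantiate. Below is a minimal illustrative sketch (not the authors' code): it builds the rows of a Hadamard matrix, takes one feature as the sparse target, and trains a spindly two-layer linear network, whose prediction is $\sum_i u_i v_i x_i$ with one edge per input in the first layer, by plain gradient descent on the squared loss. The initialization, learning rate, and step count are assumptions chosen for illustration, not values from the paper.

# Illustrative sketch of the Hadamard problem and a spindly two-layer linear network.
# Prediction is sum_i u_i * v_i * x_i, i.e. each input reaches the output through its
# own length-2 chain. Hyperparameters below are assumptions, not taken from the paper.
import numpy as np

def hadamard(d):
    # Sylvester construction; requires d to be a power of 2.
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

d = 64
H = hadamard(d)                       # instances: the rows of a d-dimensional Hadamard matrix
target_feature = 3                    # the target is a single feature (column), i.e. very sparse
y = H[:, target_feature]

rng = np.random.default_rng(0)
k = 16                                # random training set of size k < d
train_idx = rng.choice(d, size=k, replace=False)
X_train, y_train = H[train_idx], y[train_idx]

# Spindly network: one edge per input in the first layer, so the effective weight is w_i = u_i * v_i.
# With the symmetric initialization u = v, the product u_i * v_i stays nonnegative, which suffices
# here because the target coefficient is +1.
u = np.full(d, 0.1)
v = np.full(d, 0.1)
lr = 0.01

for _ in range(5000):
    pred = X_train @ (u * v)
    grad = 2.0 / k * (pred - y_train)     # gradient of the mean squared loss w.r.t. the predictions
    gw = X_train.T @ grad                 # gradient w.r.t. the effective weights w = u * v
    gu, gv = gw * v, gw * u               # chain rule through w_i = u_i * v_i
    u -= lr * gu
    v -= lr * gv

w = u * v
test_loss = np.mean((H @ w - y) ** 2)     # squared loss over all d instances
print(f"mean squared loss after k={k} examples: {test_loss:.4f}")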

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-warmuth21a, title = {A case where a spindly two-layer linear network decisively outperforms any neural network with a fully connected input layer}, author = {Warmuth, Manfred K. and Kot{\l}owski, Wojciech and Amid, Ehsan}, booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory}, pages = {1214--1236}, year = {2021}, editor = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan}, volume = {132}, series = {Proceedings of Machine Learning Research}, month = {16--19 Mar}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v132/warmuth21a/warmuth21a.pdf}, url = {https://proceedings.mlr.press/v132/warmuth21a.html}, abstract = {It was conjectured that no neural network of any structure, with arbitrary differentiable transfer functions at the nodes, can learn the following problem sample efficiently when trained with gradient descent: the instances are the rows of a $d$-dimensional Hadamard matrix and the target is one of the features, i.e., very sparse. We essentially prove this conjecture: we show that after receiving a random training set of size $k < d$, the expected squared loss is still $1-\frac{k}{d-1}$. The only requirements are that the input layer is fully connected and that the initial weight vectors of the input nodes are chosen from a rotation-invariant distribution. Surprisingly, the same type of problem can be solved drastically more efficiently by a simple two-layer linear neural network in which the $d$ inputs are connected to the output node by chains of length 2 (now the input layer has only one edge per input). When such a network is trained with gradient descent, it has been shown that its expected squared loss is $\frac{\log d}{k}$. Our lower bounds essentially show that a sparse input layer is needed to learn sparse targets sample efficiently with gradient descent.} }
Endnote
%0 Conference Paper %T A case where a spindly two-layer linear network decisively outperforms any neural network with a fully connected input layer %A Manfred K. Warmuth %A Wojciech Kotłowski %A Ehsan Amid %B Proceedings of the 32nd International Conference on Algorithmic Learning Theory %C Proceedings of Machine Learning Research %D 2021 %E Vitaly Feldman %E Katrina Ligett %E Sivan Sabato %F pmlr-v132-warmuth21a %I PMLR %P 1214--1236 %U https://proceedings.mlr.press/v132/warmuth21a.html %V 132 %X It was conjectured that no neural network of any structure, with arbitrary differentiable transfer functions at the nodes, can learn the following problem sample efficiently when trained with gradient descent: the instances are the rows of a $d$-dimensional Hadamard matrix and the target is one of the features, i.e., very sparse. We essentially prove this conjecture: we show that after receiving a random training set of size $k < d$, the expected squared loss is still $1-\frac{k}{d-1}$. The only requirements are that the input layer is fully connected and that the initial weight vectors of the input nodes are chosen from a rotation-invariant distribution. Surprisingly, the same type of problem can be solved drastically more efficiently by a simple two-layer linear neural network in which the $d$ inputs are connected to the output node by chains of length 2 (now the input layer has only one edge per input). When such a network is trained with gradient descent, it has been shown that its expected squared loss is $\frac{\log d}{k}$. Our lower bounds essentially show that a sparse input layer is needed to learn sparse targets sample efficiently with gradient descent.
APA
Warmuth, M.K., Kotłowski, W. & Amid, E. (2021). A case where a spindly two-layer linear network decisively outperforms any neural network with a fully connected input layer. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:1214-1236. Available from https://proceedings.mlr.press/v132/warmuth21a.html.