Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2979-2987, 2017.
Abstract
We provide several new depth-based separation results for feedforward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; nonlinear functions which are radial with respect to the $L_1$ norm; and smooth nonlinear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width when training neural networks to learn an indicator of a unit ball.
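As a minimal illustration of the target function in the experiment mentioned above, the sketch below trains a small one-hidden-layer ReLU network (with plain full-batch gradient descent and squared loss) to fit the indicator of the unit ball in two dimensions. This is an assumption-laden toy setup, not the paper's actual architecture, training procedure, or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 2000
X = rng.uniform(-2.0, 2.0, size=(n, d))
# target: indicator of the unit ball, 1 inside, 0 outside
y = (np.linalg.norm(X, axis=1) <= 1.0).astype(float)

# one hidden layer of ReLU units (sizes chosen arbitrarily for this toy demo)
h = 64
W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 1.0 / np.sqrt(h), h);      b2 = 0.0

def forward(X):
    Z = X @ W1 + b1          # pre-activations
    A = np.maximum(Z, 0.0)   # ReLU
    return Z, A, A @ W2 + b2 # network output

_, _, p0 = forward(X)
loss_init = np.mean((p0 - y) ** 2)

lr = 0.02
for _ in range(300):
    Z, A, p = forward(X)
    g = 2.0 * (p - y) / n          # d(mean squared loss)/d(output)
    gW2 = A.T @ g; gb2 = g.sum()
    gZ = np.outer(g, W2) * (Z > 0) # backprop through ReLU
    gW1 = X.T @ gZ; gb1 = gZ.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, _, p = forward(X)
loss_final = np.mean((p - y) ** 2)
acc = np.mean((p > 0.5) == (y > 0.5))
```

A shallow network must approximate the curved boundary of the ball with a union of ReLU kinks, which is what makes depth helpful for this target; comparing this net against deeper ones of similar total size reproduces the qualitative gap the abstract describes.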