Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks

Itay Safran, Ohad Shamir
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2979-2987, 2017.

Abstract

We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the $L_1$ norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
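As a rough illustration of the experiment mentioned above, the following sketch trains a shallow (one hidden layer) and a deeper (two hidden layers) ReLU network to approximate the indicator of the unit ball and compares their test error. This is not the authors' exact setup; the input dimension, layer widths, sample sizes, and optimizer settings below are illustrative assumptions.

    # Sketch: depth vs. width when learning the indicator of the unit ball.
    # All hyperparameters are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    d = 10                           # input dimension (assumption)
    n_train, n_test = 20000, 5000

    def sample(n):
        # Draw points with radii in (0.5, 1.5) so both classes are well represented,
        # and label each point by whether it lies inside the unit ball.
        x = torch.randn(n, d)
        x = x / x.norm(dim=1, keepdim=True) * (0.5 + torch.rand(n, 1))
        y = (x.norm(dim=1) <= 1.0).float().unsqueeze(1)
        return x, y

    def mlp(widths):
        # Fully connected ReLU network; widths = [d, h1, ..., 1].
        layers = []
        for i in range(len(widths) - 2):
            layers += [nn.Linear(widths[i], widths[i + 1]), nn.ReLU()]
        layers.append(nn.Linear(widths[-2], widths[-1]))
        return nn.Sequential(*layers)

    def train(model, x, y, epochs=200, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        return model

    x_tr, y_tr = sample(n_train)
    x_te, y_te = sample(n_test)

    shallow = train(mlp([d, 400, 1]), x_tr, y_tr)     # one hidden layer, wide
    deep = train(mlp([d, 100, 100, 1]), x_tr, y_tr)   # two hidden layers, narrower

    with torch.no_grad():
        for name, m in [("shallow (width 400)", shallow), ("deep (2 x width 100)", deep)]:
            err = nn.functional.mse_loss(m(x_te), y_te).item()
            print(f"{name}: test squared error = {err:.4f}")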

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-safran17a,
  title     = {Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks},
  author    = {Itay Safran and Ohad Shamir},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2979--2987},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/safran17a/safran17a.pdf},
  url       = {https://proceedings.mlr.press/v70/safran17a.html},
  abstract  = {We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the $L_1$ norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.}
}
Endnote
%0 Conference Paper
%T Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
%A Itay Safran
%A Ohad Shamir
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-safran17a
%I PMLR
%P 2979--2987
%U https://proceedings.mlr.press/v70/safran17a.html
%V 70
%X We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the $L_1$ norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
APA
Safran, I. & Shamir, O. (2017). Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2979-2987. Available from https://proceedings.mlr.press/v70/safran17a.html.
