Bounds on the Approximation Power of Feedforward Neural Networks

Mohammad Mehrabi, Aslan Tchamkerten, Mansoor Yousefi
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3453-3461, 2018.

Abstract

The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and network depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference of two neural networks with identical weights but different activation functions.

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-mehrabi18a,
  title     = {Bounds on the Approximation Power of Feedforward Neural Networks},
  author    = {Mehrabi, Mohammad and Tchamkerten, Aslan and Yousefi, Mansoor},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3453--3461},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/mehrabi18a/mehrabi18a.pdf},
  url       = {https://proceedings.mlr.press/v80/mehrabi18a.html},
  abstract  = {The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and network depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference of two neural networks with identical weights but different activation functions.}
}
Endnote
%0 Conference Paper
%T Bounds on the Approximation Power of Feedforward Neural Networks
%A Mohammad Mehrabi
%A Aslan Tchamkerten
%A Mansoor Yousefi
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-mehrabi18a
%I PMLR
%P 3453--3461
%U https://proceedings.mlr.press/v80/mehrabi18a.html
%V 80
%X The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and network depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference of two neural networks with identical weights but different activation functions.
APA
Mehrabi, M., Tchamkerten, A. & Yousefi, M. (2018). Bounds on the Approximation Power of Feedforward Neural Networks. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3453-3461. Available from https://proceedings.mlr.press/v80/mehrabi18a.html.