Elementary superexpressive activations

Dmitry Yarotsky
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11932-11940, 2021.

Abstract

We call a finite family of activation functions \emph{superexpressive} if any multivariate continuous function can be approximated by a neural network that uses these activations and has a fixed architecture only depending on the number of input variables (i.e., to achieve any accuracy we only need to adjust the weights, without increasing the number of neurons). Previously, it was known that superexpressive activations exist, but their form was quite complex. We give examples of very simple superexpressive families: for example, we prove that the family $\{\sin, \arcsin\}$ is superexpressive. We also show that most practical activations (not involving periodic functions) are not superexpressive.
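To make the definition concrete, here is a minimal sketch of what "fixed architecture" means for the $\{\sin, \arcsin\}$ family: the layer sizes below are fixed once the input dimension is fixed, and only the weight values would change to improve accuracy. This is purely illustrative (the widths and the clipping guard are assumptions for the sketch), not the paper's actual approximating construction.

import numpy as np

def fixed_net(x, W1, b1, W2, b2, W3, b3):
    """A network of fixed shape using only {sin, arcsin} activations.

    x: input of shape (d,). All weight shapes are fixed once d is fixed;
    accuracy is controlled by the weight values alone, never by adding
    neurons. Illustrative sketch only, not the paper's construction.
    """
    h1 = np.sin(W1 @ x + b1)
    # arcsin is only defined on [-1, 1]; the clip is a numerical guard
    # for this toy example, not part of the theoretical statement.
    h2 = np.arcsin(np.clip(W2 @ h1 + b2, -1.0, 1.0))
    return W3 @ h2 + b3  # linear readout

# Hypothetical instantiation: d = 2 inputs, fixed hidden width 8.
rng = np.random.default_rng(0)
d, w = 2, 8
W1, b1 = rng.normal(size=(w, d)), rng.normal(size=w)
W2, b2 = rng.normal(size=(w, w)), rng.normal(size=w)
W3, b3 = rng.normal(size=(1, w)), rng.normal(size=1)
print(fixed_net(np.array([0.3, -0.7]), W1, b1, W2, b2, W3, b3))

Superexpressiveness asserts that for some such fixed shape (depending only on d), suitable weights make the output uniformly close to any target continuous function; the sketch above only shows the shape being held fixed while weights vary.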

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-yarotsky21a,
  title     = {Elementary superexpressive activations},
  author    = {Yarotsky, Dmitry},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11932--11940},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/yarotsky21a/yarotsky21a.pdf},
  url       = {https://proceedings.mlr.press/v139/yarotsky21a.html}
}
APA
Yarotsky, D. (2021). Elementary superexpressive activations. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11932-11940. Available from https://proceedings.mlr.press/v139/yarotsky21a.html.