Neural network integral representations with the ReLU activation function

Armenak Petrosyan, Anton Dereventsov, Clayton G. Webster
Proceedings of The First Mathematical and Scientific Machine Learning Conference, PMLR 107:128-143, 2020.

Abstract

In this effort, we derive a formula for the integral representation of a shallow neural network with the ReLU activation function. We assume that the outer weights admit a finite $L_1$-norm with respect to the Lebesgue measure on the sphere. For univariate target functions we further provide a closed-form formula for all possible representations. Additionally, in this case our formula allows one to explicitly solve for the least $L_1$-norm neural network representation of a given function.

Cite this Paper


BibTeX
@InProceedings{pmlr-v107-petrosyan20a,
  title     = {{Neural network integral representations with the ReLU activation function}},
  author    = {Petrosyan, Armenak and Dereventsov, Anton and Webster, Clayton G.},
  booktitle = {Proceedings of The First Mathematical and Scientific Machine Learning Conference},
  pages     = {128--143},
  year      = {2020},
  editor    = {Lu, Jianfeng and Ward, Rachel},
  volume    = {107},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v107/petrosyan20a/petrosyan20a.pdf},
  url       = {https://proceedings.mlr.press/v107/petrosyan20a.html},
  abstract  = {In this effort, we derive a formula for the integral representation of a shallow neural network with the ReLU activation function. We assume that the outer weights admit a finite $L_1$-norm with respect to the Lebesgue measure on the sphere. For univariate target functions we further provide a closed-form formula for all possible representations. Additionally, in this case our formula allows one to explicitly solve for the least $L_1$-norm neural network representation of a given function.}
}
Endnote
%0 Conference Paper
%T Neural network integral representations with the ReLU activation function
%A Armenak Petrosyan
%A Anton Dereventsov
%A Clayton G. Webster
%B Proceedings of The First Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Jianfeng Lu
%E Rachel Ward
%F pmlr-v107-petrosyan20a
%I PMLR
%P 128--143
%U https://proceedings.mlr.press/v107/petrosyan20a.html
%V 107
%X In this effort, we derive a formula for the integral representation of a shallow neural network with the ReLU activation function. We assume that the outer weights admit a finite $L_1$-norm with respect to the Lebesgue measure on the sphere. For univariate target functions we further provide a closed-form formula for all possible representations. Additionally, in this case our formula allows one to explicitly solve for the least $L_1$-norm neural network representation of a given function.
APA
Petrosyan, A., Dereventsov, A. & Webster, C.G. (2020). Neural network integral representations with the ReLU activation function. Proceedings of The First Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 107:128-143. Available from https://proceedings.mlr.press/v107/petrosyan20a.html.