# Neural network integral representations with the ReLU activation function

*Proceedings of The First Mathematical and Scientific Machine Learning Conference*, PMLR 107:128-143, 2020.

#### Abstract

In this effort, we derive a formula for the integral representation of a shallow neural network with the ReLU activation function. We assume that the outer weights admit a finite $L_1$-norm with respect to the Lebesgue measure on the sphere. For univariate target functions we further provide a closed-form formula for all possible representations. Additionally, in this case our formula allows one to explicitly solve for the least $L_1$-norm neural network representation of a given function.
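
To make the idea of an integral representation concrete, here is a minimal sketch, not the paper's formula: the Taylor identity with integral remainder gives, for a smooth univariate $f$ on $[0,1]$ with $f(0)=f'(0)=0$, the representation $f(x) = \int_0^1 f''(b)\,\mathrm{ReLU}(x-b)\,db$, i.e. a continuum of ReLU neurons with biases $b$ and outer weights $f''(b)$. Discretizing the integral recovers an ordinary shallow network:

```python
import numpy as np

# Illustrative sketch (assumption: the classical Taylor-remainder identity,
# not the representation derived in the paper). For f(x) = x^2 on [0, 1],
# f''(b) = 2, so x^2 = ∫_0^1 2 · ReLU(x - b) db; a midpoint-rule
# discretization turns the integral into a finite shallow ReLU network.

relu = lambda z: np.maximum(z, 0.0)

n = 1000                                  # number of quadrature nodes / neurons
b = (np.arange(n) + 0.5) / n              # midpoint-rule bias grid on [0, 1]
outer = 2.0 / n                           # outer weight f''(b) times cell width

x = np.linspace(0.0, 1.0, 201)
approx = outer * relu(x[:, None] - b[None, :]).sum(axis=1)

max_err = np.abs(approx - x**2).max()
print(max_err)                            # small discretization error, O(1/n^2)
```

As $n$ grows, the finite network converges to the integral representation; the sum of the absolute outer weights, $\sum_i |f''(b_i)|\,\Delta b$, converges to the $L_1$-norm of the outer-weight density, which is the quantity the abstract's minimal-norm result concerns.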