Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs

Andrey Kofnov, Daniel Kapla, Ezio Bartocci, Efstathia Bura
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:31133-31157, 2025.

Abstract

We derive exact upper and lower bounds for the cumulative distribution function (cdf) of the output of a neural network (NN) with noisy (stochastic) inputs, over the entire support of the output. The upper and lower bounds converge to the true cdf over its domain as the resolution increases. Our method applies to any feedforward NN using continuous, monotonic, piecewise twice continuously differentiable activation functions (e.g., ReLU, tanh and softmax), as well as to convolutional NNs, which were beyond the scope of competing approaches. The novelty and instrumental tool of our approach is to bound general NNs with ReLU NNs. The ReLU NN-based bounds are then used to derive the upper and lower bounds of the cdf of the NN output. Experiments demonstrate that our method delivers guaranteed bounds of the predictive output distribution over its support, thus providing exact error guarantees, in contrast to competing approaches.
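To illustrate the sandwiching idea, the sketch below (plain NumPy, not the authors' implementation) bounds a one-hidden-layer tanh network between two surrogate networks and turns the pointwise sandwich into cdf bounds. Everything in it is an illustrative assumption: the weights, the Gaussian input law, the knot grid, the piecewise-constant envelopes (the paper uses piecewise-linear, ReLU-representable envelopes), and the Monte Carlo estimate (the paper computes the surrogate probabilities exactly).

import numpy as np

rng = np.random.default_rng(0)

# Toy network y = w2 . tanh(W1 x + b1) + b2 with a 2-d Gaussian input
# (all weights are arbitrary, for illustration only).
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), 0.1

# Monotone piecewise-constant envelopes of tanh on a knot grid: on each cell
# [g_i, g_{i+1}] we have tanh(g_i) <= tanh(z) <= tanh(g_{i+1}), since tanh is
# increasing. Refining the grid tightens the envelopes.
grid = np.linspace(-6.0, 6.0, 241)

def tanh_lower(z):
    i = np.clip(np.searchsorted(grid, z, side="right") - 1, 0, len(grid) - 1)
    return np.where(z <= grid[0], -1.0, np.tanh(grid[i]))  # tanh > -1 everywhere

def tanh_upper(z):
    i = np.clip(np.searchsorted(grid, z, side="left"), 0, len(grid) - 1)
    return np.where(z >= grid[-1], 1.0, np.tanh(grid[i]))  # tanh < 1 everywhere

def output_bounds(x):
    """Pointwise bounds y_lo(x) <= y(x) <= y_hi(x) for one input x."""
    z = W1 @ x + b1
    lo, hi = tanh_lower(z), tanh_upper(z)
    # A positive outgoing weight preserves the ordering; a negative one flips it.
    y_lo = np.sum(np.where(w2 > 0, w2 * lo, w2 * hi)) + b2
    y_hi = np.sum(np.where(w2 > 0, w2 * hi, w2 * lo)) + b2
    return y_lo, y_hi

# The sandwich gives {y_hi <= t} subset of {y <= t} subset of {y_lo <= t}, so
# P(y_hi <= t) <= F_y(t) <= P(y_lo <= t). Here both sides are estimated by
# Monte Carlo and therefore carry sampling error on top of the structural bounds.
xs = rng.normal(size=(20000, 2))
bounds = np.array([output_bounds(x) for x in xs])
t = 0.0
print("cdf lower bound at t:", np.mean(bounds[:, 1] <= t))
print("cdf upper bound at t:", np.mean(bounds[:, 0] <= t))

The paper's construction differs from this sketch in two ways that matter for exactness: its envelopes are themselves ReLU NNs (piecewise linear, hence amenable to exact analysis), and the probabilities of the surrogate events are computed exactly rather than sampled.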

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-kofnov25a,
  title     = {Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs},
  author    = {Kofnov, Andrey and Kapla, Daniel and Bartocci, Ezio and Bura, Efstathia},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {31133--31157},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kofnov25a/kofnov25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kofnov25a.html}
}
APA
Kofnov, A., Kapla, D., Bartocci, E., & Bura, E. (2025). Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs. In Proceedings of the 42nd International Conference on Machine Learning, Proceedings of Machine Learning Research 267:31133-31157. Available from https://proceedings.mlr.press/v267/kofnov25a.html.
