Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations

Davide Sartor, Alberto Sinigaglia, Gian Antonio Susto
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:52995-53018, 2025.

Abstract

Imposing input-output constraints in multi-layer perceptrons (MLPs) plays a pivotal role in many real-world applications. Monotonicity in particular is a common requirement in applications that need transparent and robust machine learning models. Conventional techniques for imposing monotonicity in MLPs by construction involve the use of non-negative weight constraints and bounded activation functions, which pose well-known optimization challenges. In this work, we generalize previous theoretical results, showing that MLPs with non-negative weight constraints and activations that saturate on alternating sides are universal approximators for monotonic functions. Additionally, we show an equivalence between the saturation side of the activations and the sign of the weight constraint. This connection allows us to prove that MLPs with convex monotone activations and non-positive constrained weights also qualify as universal approximators, in contrast to their non-negative constrained counterparts. These results provide theoretical grounding for the empirical effectiveness observed in previous works, while suggesting possible architectural simplifications. Moreover, to further alleviate the optimization difficulties, we propose an alternative formulation that allows the network to adjust its activations according to the sign of the weights. This eliminates the requirement for weight reparameterization, easing initialization and improving training stability. Experimental evaluation reinforces the validity of the theoretical results, showing that our novel approach compares favorably to traditional monotonic architectures.
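As an illustrative sketch (not the authors' implementation), the two ingredients mentioned in the abstract can be written in a few lines of plain Python: an MLP made non-decreasing by construction via a non-negative weight reparameterization (here a simple `abs()`), and the point reflection σ̃(z) = −σ(−z), which flips the saturation side of an activation and underlies the stated equivalence between non-positive weights with a convex monotone activation (e.g. ReLU) and non-negative weights with its reflected counterpart. All function names here are hypothetical.

```python
def relu(z):
    # Convex monotone activation, saturating on the left.
    return max(z, 0.0)

def reflected(act):
    # Point reflection act~(z) = -act(-z): flips the saturation
    # side (convex/left-saturating -> concave/right-saturating)
    # while preserving monotonicity.
    return lambda z: -act(-z)

def monotone_mlp(x, layers, act=relu):
    # Scalar-input MLP that is non-decreasing in x by construction:
    # every raw weight is mapped to abs(w) >= 0, and the activation
    # is monotone non-decreasing, so the composition is monotone.
    h = [x]
    for weights, biases in layers:
        h = [act(sum(abs(w) * v for w, v in zip(row, h)) + b)
             for row, b in zip(weights, biases)]
    return sum(h)  # non-negative combination of monotone units

# Raw (possibly negative) weights; abs() enforces the constraint.
layers = [([[1.0], [-2.0]], [0.5, -1.0]),
          ([[0.5, -0.3]], [0.0])]
ys = [monotone_mlp(x, layers) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]
assert all(a <= b for a, b in zip(ys, ys[1:]))  # monotone outputs

# Sign/side equivalence: for w <= 0 and v <= 0,
#   w * act(v * x) == |w| * reflected(act)(|v| * x),
# i.e. a non-positive-weight unit with a convex activation equals a
# non-negative-weight unit with the reflected (concave) activation.
w, v = -2.0, -0.7
for x in (-1.5, 0.0, 2.0):
    lhs = w * relu(v * x)
    rhs = abs(w) * reflected(relu)(abs(v) * x)
    assert abs(lhs - rhs) < 1e-12
```

The identity holds term by term: reflected(relu)(|v|·x) = −relu(−|v|·x) = −relu(v·x) when v ≤ 0, and multiplying by |w| = −w recovers w·relu(v·x). The paper's actual constructions and universal-approximation arguments are more general; this only demonstrates the monotonicity-by-construction mechanism and the reflection trick numerically.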

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-sartor25a,
  title     = {Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations},
  author    = {Sartor, Davide and Sinigaglia, Alberto and Susto, Gian Antonio},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {52995--53018},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/sartor25a/sartor25a.pdf},
  url       = {https://proceedings.mlr.press/v267/sartor25a.html},
  abstract  = {Imposing input-output constraints in multi-layer perceptrons (MLPs) plays a pivotal role in many real-world applications. Monotonicity in particular is a common requirement in applications that need transparent and robust machine learning models. Conventional techniques for imposing monotonicity in MLPs by construction involve the use of non-negative weight constraints and bounded activation functions, which pose well-known optimization challenges. In this work, we generalize previous theoretical results, showing that MLPs with non-negative weight constraints and activations that saturate on alternating sides are universal approximators for monotonic functions. Additionally, we show an equivalence between the saturation side of the activations and the sign of the weight constraint. This connection allows us to prove that MLPs with convex monotone activations and non-positive constrained weights also qualify as universal approximators, in contrast to their non-negative constrained counterparts. These results provide theoretical grounding for the empirical effectiveness observed in previous works, while suggesting possible architectural simplifications. Moreover, to further alleviate the optimization difficulties, we propose an alternative formulation that allows the network to adjust its activations according to the sign of the weights. This eliminates the requirement for weight reparameterization, easing initialization and improving training stability. Experimental evaluation reinforces the validity of the theoretical results, showing that our novel approach compares favorably to traditional monotonic architectures.}
}
Endnote
%0 Conference Paper
%T Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations
%A Davide Sartor
%A Alberto Sinigaglia
%A Gian Antonio Susto
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-sartor25a
%I PMLR
%P 52995--53018
%U https://proceedings.mlr.press/v267/sartor25a.html
%V 267
%X Imposing input-output constraints in multi-layer perceptrons (MLPs) plays a pivotal role in many real-world applications. Monotonicity in particular is a common requirement in applications that need transparent and robust machine learning models. Conventional techniques for imposing monotonicity in MLPs by construction involve the use of non-negative weight constraints and bounded activation functions, which pose well-known optimization challenges. In this work, we generalize previous theoretical results, showing that MLPs with non-negative weight constraints and activations that saturate on alternating sides are universal approximators for monotonic functions. Additionally, we show an equivalence between the saturation side of the activations and the sign of the weight constraint. This connection allows us to prove that MLPs with convex monotone activations and non-positive constrained weights also qualify as universal approximators, in contrast to their non-negative constrained counterparts. These results provide theoretical grounding for the empirical effectiveness observed in previous works, while suggesting possible architectural simplifications. Moreover, to further alleviate the optimization difficulties, we propose an alternative formulation that allows the network to adjust its activations according to the sign of the weights. This eliminates the requirement for weight reparameterization, easing initialization and improving training stability. Experimental evaluation reinforces the validity of the theoretical results, showing that our novel approach compares favorably to traditional monotonic architectures.
APA
Sartor, D., Sinigaglia, A., & Susto, G. A. (2025). Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:52995-53018. Available from https://proceedings.mlr.press/v267/sartor25a.html.
