How to address monotonicity for model risk management?

Dangxing Chen, Weicheng Ye
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:5282-5295, 2023.

Abstract

In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. To achieve monotonicity while maintaining transparency, we propose monotonic groves of neural additive models. Through empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair.
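For quick reference, the three monotonicity notions named in the abstract can be sketched as follows. This is our paraphrase, not the paper's formal statement: the notation (model f, input x, unit vector e_α for feature α, perturbation c > 0) is assumed here, and the paper's own definitions are authoritative.

% Hedged paraphrase of the three notions; notation is ours, not the paper's.
% Individual monotonicity in feature \alpha: increasing x_\alpha never decreases the output.
f(x) \le f(x + c\,e_\alpha) \quad \forall x,\ c > 0.
% Strong pairwise monotonicity (feature \alpha dominates feature \beta at every input):
f(x + c\,e_\beta) \le f(x + c\,e_\alpha) \quad \forall x,\ c > 0.
% Weak pairwise monotonicity (\alpha dominates \beta only from an equal footing):
f(x + c\,e_\beta) \le f(x + c\,e_\alpha) \quad \text{whenever } x_\alpha = x_\beta,\ c > 0.

Under this reading, strong pairwise monotonicity implies the weak form, since the weak condition is simply the strong one restricted to inputs with x_\alpha = x_\beta.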

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-chen23al,
  title     = {How to address monotonicity for model risk management?},
  author    = {Chen, Dangxing and Ye, Weicheng},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {5282--5295},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/chen23al/chen23al.pdf},
  url       = {https://proceedings.mlr.press/v202/chen23al.html},
  abstract  = {In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. To achieve monotonicity while maintaining transparency, we propose monotonic groves of neural additive models. Through empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair.}
}
Endnote
%0 Conference Paper
%T How to address monotonicity for model risk management?
%A Dangxing Chen
%A Weicheng Ye
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-chen23al
%I PMLR
%P 5282--5295
%U https://proceedings.mlr.press/v202/chen23al.html
%V 202
%X In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. To achieve monotonicity while maintaining transparency, we propose monotonic groves of neural additive models. Through empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair.
APA
Chen, D. & Ye, W. (2023). How to address monotonicity for model risk management? Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:5282-5295. Available from https://proceedings.mlr.press/v202/chen23al.html.
