Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean

Anton Frederik Thielmann, René-Marcel Kruse, Thomas Kneib, Benjamin Säfken
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1783-1791, 2024.

Abstract

Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks, making them the go-to method for problems requiring high-level predictive power. Despite this success, the inner workings of DNNs are often not transparent, making them difficult to interpret or understand. This lack of interpretability has led to increased research on inherently interpretable neural networks in recent years. Models such as Neural Additive Models (NAMs) achieve visual interpretability through the combination of classical statistical methods with DNNs. However, these approaches concentrate only on predicting the mean response, leaving out other properties of the response distribution of the underlying data. We propose Neural Additive Models for Location Scale and Shape (NAMLSS), a modelling framework that combines the predictive power of classical deep learning models with the inherent advantages of distributional regression while maintaining the interpretability of additive models. The code is available at https://github.com/AnFreTh/NAMpy
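To make the framework concrete, below is a minimal, hypothetical PyTorch sketch of the NAMLSS idea; it is not the authors' NAMpy implementation (see the repository above for the reference code). Each input feature is routed through its own small subnetwork for every distributional parameter, the per-feature contributions are summed additively, a softplus link keeps the scale parameter positive, and training minimises the negative log-likelihood of the chosen response distribution (here a Gaussian). The names FeatureNet and NAMLSSSketch, the layer sizes, and the Gaussian choice are illustrative assumptions.

import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    # Small MLP taking a single feature and returning one additive contribution.
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)


class NAMLSSSketch(nn.Module):
    # Additive model with separate per-feature networks for location and scale.
    def __init__(self, n_features: int):
        super().__init__()
        self.loc_nets = nn.ModuleList(FeatureNet() for _ in range(n_features))
        self.scale_nets = nn.ModuleList(FeatureNet() for _ in range(n_features))

    def forward(self, x):  # x: (batch, n_features)
        cols = x.split(1, dim=1)
        loc = sum(net(c) for net, c in zip(self.loc_nets, cols))
        # Softplus link keeps the scale parameter strictly positive.
        scale = nn.functional.softplus(
            sum(net(c) for net, c in zip(self.scale_nets, cols))
        )
        return loc.squeeze(-1), scale.squeeze(-1)


# Toy training loop: minimise the Gaussian negative log-likelihood on synthetic
# data whose noise level depends on the second feature (heteroscedasticity).
x = torch.randn(256, 3)
y = 2.0 * x[:, 0] + torch.randn(256) * torch.exp(0.5 * x[:, 1])

model = NAMLSSSketch(n_features=3)
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    loc, scale = model(x)
    nll = -torch.distributions.Normal(loc, scale).log_prob(y).mean()
    optim.zero_grad()
    nll.backward()
    optim.step()

Because each distributional parameter is an additive sum of univariate feature networks, the fitted per-feature curves can be plotted directly, which is what gives NAM-style models their visual interpretability.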

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-frederik-thielmann24a,
  title     = {Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean},
  author    = {Frederik Thielmann, Anton and Kruse, Ren\'{e}-Marcel and Kneib, Thomas and S\"{a}fken, Benjamin},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1783--1791},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/frederik-thielmann24a/frederik-thielmann24a.pdf},
  url       = {https://proceedings.mlr.press/v238/frederik-thielmann24a.html},
  abstract  = {Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks, making them the go-to method for problems requiring high-level predictive power. Despite this success, the inner workings of DNNs are often not transparent, making them difficult to interpret or understand. This lack of interpretability has led to increased research on inherently interpretable neural networks in recent years. Models such as Neural Additive Models (NAMs) achieve visual interpretability through the combination of classical statistical methods with DNNs. However, these approaches only concentrate on mean response predictions, leaving out other properties of the response distribution of the underlying data. We propose Neural Additive Models for Location Scale and Shape (NAMLSS), a modelling framework that combines the predictive power of classical deep learning models with the inherent advantages of distributional regression while maintaining the interpretability of additive models. The code is available at the following link: \url{https://github.com/AnFreTh/NAMpy}}
}
Endnote
%0 Conference Paper
%T Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean
%A Anton Frederik Thielmann
%A René-Marcel Kruse
%A Thomas Kneib
%A Benjamin Säfken
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-frederik-thielmann24a
%I PMLR
%P 1783--1791
%U https://proceedings.mlr.press/v238/frederik-thielmann24a.html
%V 238
%X Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks, making them the go-to method for problems requiring high-level predictive power. Despite this success, the inner workings of DNNs are often not transparent, making them difficult to interpret or understand. This lack of interpretability has led to increased research on inherently interpretable neural networks in recent years. Models such as Neural Additive Models (NAMs) achieve visual interpretability through the combination of classical statistical methods with DNNs. However, these approaches only concentrate on mean response predictions, leaving out other properties of the response distribution of the underlying data. We propose Neural Additive Models for Location Scale and Shape (NAMLSS), a modelling framework that combines the predictive power of classical deep learning models with the inherent advantages of distributional regression while maintaining the interpretability of additive models. The code is available at the following link: \url{https://github.com/AnFreTh/NAMpy}
APA
Frederik Thielmann, A., Kruse, R., Kneib, T. & Säfken, B. (2024). Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1783-1791. Available from https://proceedings.mlr.press/v238/frederik-thielmann24a.html.
