Explainability as statistical inference

Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:30584-30612, 2023.

Abstract

A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model’s parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our model is akin to amortized interpretability methods, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularized maximum likelihood for our general model. Using our framework, we identify imputation as a common issue of these models. We propose new datasets with ground-truth selection, which allow for the evaluation of feature importance maps, and show experimentally that multiple imputation provides more reasonable interpretations.
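To make the description above concrete, the following is a minimal illustrative sketch of the kind of pipeline the abstract describes: a selector network proposes a per-feature mask, unselected features are filled in by multiple imputation, and a predictor is trained by (regularized) maximum likelihood on the imputed input. This is not the authors' reference implementation; the layer sizes, the relaxed-Bernoulli mask, the crude Gaussian imputer, the L1-style sparsity penalty, and the names SelectorPredictor and training_step are all placeholder assumptions made for brevity.

    # Illustrative sketch only: amortized selector + predictor trained by maximum
    # likelihood, with multiple imputation of the unselected features.
    # Architectures, the imputation scheme, and the regularizer are assumptions,
    # not the paper's actual model.
    import torch
    import torch.nn as nn

    class SelectorPredictor(nn.Module):
        def __init__(self, d_in, n_classes, n_imputations=5):
            super().__init__()
            # Selector outputs one selection logit per input feature.
            self.selector = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                          nn.Linear(64, d_in))
            # Predictor maps the (imputed) input to class logits.
            self.predictor = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                           nn.Linear(64, n_classes))
            self.n_imputations = n_imputations

        def forward(self, x):
            # Relaxed Bernoulli mask keeps the selector differentiable.
            probs = torch.sigmoid(self.selector(x))
            mask = torch.distributions.RelaxedBernoulli(temperature=0.5,
                                                        probs=probs).rsample()
            # Multiple imputation: average predictions over several random fills
            # of the unselected features (a Gaussian fill-in is used here as a
            # crude stand-in for a learned imputation model). Averaging logits is
            # itself a simplification of averaging likelihoods.
            logits = 0.0
            for _ in range(self.n_imputations):
                imputed = mask * x + (1 - mask) * torch.randn_like(x)
                logits = logits + self.predictor(imputed)
            return logits / self.n_imputations, probs

    def training_step(model, x, y, optimizer, sparsity_weight=1e-2):
        # Regularized maximum likelihood: negative log-likelihood of the labels
        # plus a penalty encouraging sparse feature selections.
        logits, probs = model(x)
        loss = nn.functional.cross_entropy(logits, y) + sparsity_weight * probs.mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In such a setup, the selector probabilities probs for a given input would serve as a per-feature importance map at inference time, which is the kind of object the paper's ground-truth-selection datasets are designed to evaluate.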

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-senetaire23a,
  title     = {Explainability as statistical inference},
  author    = {Senetaire, Hugo Henri Joseph and Garreau, Damien and Frellsen, Jes and Mattei, Pierre-Alexandre},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {30584--30612},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/senetaire23a/senetaire23a.pdf},
  url       = {https://proceedings.mlr.press/v202/senetaire23a.html}
}
Endnote
%0 Conference Paper
%T Explainability as statistical inference
%A Hugo Henri Joseph Senetaire
%A Damien Garreau
%A Jes Frellsen
%A Pierre-Alexandre Mattei
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-senetaire23a
%I PMLR
%P 30584--30612
%U https://proceedings.mlr.press/v202/senetaire23a.html
%V 202
APA
Senetaire, H. H. J., Garreau, D., Frellsen, J. & Mattei, P.-A. (2023). Explainability as statistical inference. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:30584-30612. Available from https://proceedings.mlr.press/v202/senetaire23a.html.
