On the Relationship Between Explanation and Prediction: A Causal View

Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15861-15883, 2023.

Abstract

Being able to provide explanations for a model’s decision has become a central requirement for the development, deployment, and adoption of machine learning models. However, we are yet to understand what explanation methods can and cannot do. How do upstream factors such as data, model prediction, hyperparameters, and random initialization influence downstream explanations? While previous work raised concerns that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study to quantify this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we study the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., on hyperparameters and inputs used to generate saliency-based Es or Ys. Our results suggest that the relationship between E and Y is far from ideal. In fact, the gap from the ‘ideal’ case only increases in higher-performing models — models that are likely to be deployed. Our work is a promising first step towards providing a quantitative measure of the relationship between E and Y, which could also inform the future development of methods for E with a quantitative metric.
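The core measurement the abstract describes — intervening on an upstream factor (here, the random initialization seed) and comparing the induced change in the prediction Y against the induced change in a gradient-based explanation E — can be sketched on a toy logistic model. This is a hypothetical illustration of the idea only, not the paper's actual experimental setup (which uses deep image classifiers and saliency methods):

```python
import numpy as np

# Toy "model": y = sigmoid(w @ x). Its gradient-based explanation E is the
# gradient of the output w.r.t. the input x.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, seed, epochs=200, lr=0.1):
    rng = np.random.default_rng(seed)           # intervention target: init seed
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on logistic loss
    return w

def saliency(w, x):
    p = sigmoid(w @ x)
    return p * (1 - p) * w                      # d sigmoid(w @ x) / dx

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = (sigmoid(X @ true_w) > 0.5).astype(float)

# do(seed): two models that differ ONLY in their random initialization.
x_test = rng.normal(size=5)
w_a = train(X, y, seed=1)
w_b = train(X, y, seed=2)

# Treatment effect of the intervention on the prediction Y ...
delta_Y = abs(sigmoid(w_a @ x_test) - sigmoid(w_b @ x_test))

# ... versus on the explanation E (cosine distance between saliency vectors).
e_a, e_b = saliency(w_a, x_test), saliency(w_b, x_test)
delta_E = 1 - (e_a @ e_b) / (np.linalg.norm(e_a) * np.linalg.norm(e_b))

print(f"change in Y: {delta_Y:.4f}, change in E: {delta_E:.4f}")
```

If E faithfully tracked Y, an intervention that barely moves the prediction should also barely move the explanation; a large gap between the two deltas is the kind of discrepancy the paper quantifies.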

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-karimi23a,
  title     = {On the Relationship Between Explanation and Prediction: A Causal View},
  author    = {Karimi, Amir-Hossein and Muandet, Krikamol and Kornblith, Simon and Sch\"{o}lkopf, Bernhard and Kim, Been},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {15861--15883},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/karimi23a/karimi23a.pdf},
  url       = {https://proceedings.mlr.press/v202/karimi23a.html},
  abstract  = {Being able to provide explanations for a model’s decision has become a central requirement for the development, deployment, and adoption of machine learning models. However, we are yet to understand what explanation methods can and cannot do. How do upstream factors such as data, model prediction, hyperparameters, and random initialization influence downstream explanations? While previous work raised concerns that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study to quantify this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we study the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., on hyperparameters and inputs used to generate saliency-based Es or Ys. Our results suggest that the relationship between E and Y is far from ideal. In fact, the gap from the ‘ideal’ case only increases in higher-performing models — models that are likely to be deployed. Our work is a promising first step towards providing a quantitative measure of the relationship between E and Y, which could also inform the future development of methods for E with a quantitative metric.}
}
Endnote
%0 Conference Paper
%T On the Relationship Between Explanation and Prediction: A Causal View
%A Amir-Hossein Karimi
%A Krikamol Muandet
%A Simon Kornblith
%A Bernhard Schölkopf
%A Been Kim
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-karimi23a
%I PMLR
%P 15861--15883
%U https://proceedings.mlr.press/v202/karimi23a.html
%V 202
%X Being able to provide explanations for a model’s decision has become a central requirement for the development, deployment, and adoption of machine learning models. However, we are yet to understand what explanation methods can and cannot do. How do upstream factors such as data, model prediction, hyperparameters, and random initialization influence downstream explanations? While previous work raised concerns that explanations (E) may have little relationship with the prediction (Y), there is a lack of conclusive study to quantify this relationship. Our work borrows tools from causal inference to systematically assay this relationship. More specifically, we study the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., on hyperparameters and inputs used to generate saliency-based Es or Ys. Our results suggest that the relationship between E and Y is far from ideal. In fact, the gap from the ‘ideal’ case only increases in higher-performing models — models that are likely to be deployed. Our work is a promising first step towards providing a quantitative measure of the relationship between E and Y, which could also inform the future development of methods for E with a quantitative metric.
APA
Karimi, A., Muandet, K., Kornblith, S., Schölkopf, B. & Kim, B. (2023). On the Relationship Between Explanation and Prediction: A Causal View. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:15861-15883. Available from https://proceedings.mlr.press/v202/karimi23a.html.