Problems with Shapley-value-based explanations as feature importance measures

I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, Sorelle Friedler
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5491-5500, 2020.

Abstract

Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input elements using some form of the game’s unique Shapley values. Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations. We show that mathematical problems arise when Shapley values are used for feature importance and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning. We also draw on additional literature to argue that Shapley values do not provide explanations which suit human-centric goals of explainability.
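The cooperative game described above distributes a model's output among its features via the Shapley value, the weighted average of each feature's marginal contribution over all coalitions. As a minimal illustration (not the paper's method), the sketch below computes exact Shapley values for a tiny hypothetical game whose value function and payoff numbers are invented for the example; practical explainers must approximate this exponential sum and must choose how to define `v` for absent features, which is one source of the problems the paper analyzes.

```python
from itertools import combinations
from math import factorial

def shapley_values(n, v):
    """Exact Shapley values for an n-player cooperative game.

    v maps a frozenset of player indices (a coalition) to its payoff.
    Cost is O(2^n), so real feature-attribution tools approximate it.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Hypothetical 3-feature "model": additive contributions plus an
# interaction between features 0 and 1 (numbers chosen for the example).
def v(S):
    payoff = sum({0: 1.0, 1: 2.0, 2: 0.5}[j] for j in S)
    if 0 in S and 1 in S:
        payoff += 1.0  # interaction term, split evenly by symmetry
    return payoff

phi = shapley_values(3, v)
```

By efficiency, the attributions sum to `v` of the full coalition minus `v` of the empty one, and the symmetric interaction is split equally between features 0 and 1; these are exactly the axiomatic properties the paper argues are insufficient justification on their own.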

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-kumar20e,
  title     = {Problems with Shapley-value-based explanations as feature importance measures},
  author    = {Kumar, I. Elizabeth and Venkatasubramanian, Suresh and Scheidegger, Carlos and Friedler, Sorelle},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5491--5500},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/kumar20e/kumar20e.pdf},
  url       = {https://proceedings.mlr.press/v119/kumar20e.html},
  abstract  = {Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input elements using some form of the game’s unique Shapley values. Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations. We show that mathematical problems arise when Shapley values are used for feature importance and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning. We also draw on additional literature to argue that Shapley values do not provide explanations which suit human-centric goals of explainability.}
}
Endnote
%0 Conference Paper
%T Problems with Shapley-value-based explanations as feature importance measures
%A I. Elizabeth Kumar
%A Suresh Venkatasubramanian
%A Carlos Scheidegger
%A Sorelle Friedler
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-kumar20e
%I PMLR
%P 5491--5500
%U https://proceedings.mlr.press/v119/kumar20e.html
%V 119
%X Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input elements using some form of the game’s unique Shapley values. Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations. We show that mathematical problems arise when Shapley values are used for feature importance and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning. We also draw on additional literature to argue that Shapley values do not provide explanations which suit human-centric goals of explainability.
APA
Kumar, I.E., Venkatasubramanian, S., Scheidegger, C. & Friedler, S. (2020). Problems with Shapley-value-based explanations as feature importance measures. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5491-5500. Available from https://proceedings.mlr.press/v119/kumar20e.html.