Accurate Shapley Values for explaining tree-based models
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2448-2465, 2022.
Abstract
Although Shapley Values (SV) are widely used in explainable AI, they can be poorly understood and estimated, so their analysis may lead to spurious inferences and explanations. As a starting point, we recall an invariance principle for SV and derive the correct approach for computing the SV of categorical variables, which are particularly sensitive to the encoding used. In the case of tree-based models, we introduce two estimators of Shapley Values that exploit the tree structure efficiently and are more accurate than state-of-the-art methods. Simulations and comparisons with state-of-the-art algorithms show the practical gain of our approach. Finally, we discuss the ability of SV to provide reliable local explanations. We also provide a Python package that computes our estimators at https://github.com/salimamoukou/acv00.
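Since the abstract turns on how SV are defined and estimated, the following minimal sketch may help fix ideas. It is not the paper's tree-exploiting estimators and does not use the acv00 package API; it only illustrates the classical Shapley value definition, computed by brute-force enumeration of coalitions with a marginal value function over a background sample. All names, the toy model, and the data are illustrative assumptions.

```python
import itertools
from math import factorial

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy setup (illustrative): a small tree model and a background sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 2 * X[:, 1] * (X[:, 2] > 0)
model = DecisionTreeRegressor(max_depth=4).fit(X, y)


def value(S, x, background, predict):
    """Marginal value of coalition S: features in S are fixed to x,
    the remaining features are drawn from the background sample."""
    Z = background.copy()
    Z[:, list(S)] = x[list(S)]
    return predict(Z).mean()


def brute_force_shapley(x, background, predict):
    """Exact SV via the combinatorial definition:
    phi_i = sum_S |S|!(n-|S|-1)!/n! * [v(S u {i}) - v(S)]."""
    n = x.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(S + (i,), x, background, predict)
                               - value(S, x, background, predict))
    return phi


x_explain = X[0]
phi = brute_force_shapley(x_explain, X[:100], model.predict)
# Efficiency property: the contributions sum to f(x) minus the mean prediction.
print(phi, phi.sum(), model.predict(x_explain[None])[0] - model.predict(X[:100]).mean())
```

This brute force costs O(2^n) model evaluations per instance, which is exactly the bottleneck that tree-structure-aware estimators such as those introduced in the paper are designed to avoid.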