Explaining the Explainer: A First Theoretical Analysis of LIME

Damien Garreau, Ulrike von Luxburg
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1287-1296, 2020.

Abstract

Machine learning is used more and more often for sensitive applications, sometimes replacing humans in critical decision-making processes. As such, interpretability of these algorithms is a pressing need. One popular algorithm to provide interpretability is LIME (Local Interpretable Model-Agnostic Explanation). In this paper, we provide the first theoretical analysis of LIME. We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear. The good news is that these coefficients are proportional to the gradient of the function to explain: LIME indeed discovers meaningful features. However, our analysis also reveals that poor choices of parameters can lead LIME to miss important features.
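The linear case analyzed in the paper can be illustrated with a minimal LIME-style sketch. This is not the paper's exact setup (the hypothetical example below uses continuous features, an assumed Gaussian proximity kernel with bandwidth `nu`, and plain weighted least squares rather than LIME's interpretable binary features), but it shows the headline result: when the function to explain is linear, the surrogate's coefficients recover its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Function to explain: linear, f(x) = w . x, so grad f = w everywhere.
w = np.array([2.0, -1.0, 0.5])
f = lambda x: x @ w

x0 = np.array([1.0, 1.0, 1.0])  # the instance to explain
nu = 1.0                        # assumed kernel bandwidth (a LIME parameter)

# Sample perturbations around x0 and weight them by proximity to x0,
# as LIME does (simplified here to Gaussian sampling in feature space).
Z = x0 + rng.normal(size=(1000, 3))
pi = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * nu ** 2))

# Fit a weighted linear surrogate g(z) = b0 + beta . z by weighted
# least squares on the sampled points.
X = np.hstack([np.ones((len(Z), 1)), Z])
W = np.diag(pi)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ f(Z))
beta = coef[1:]  # surrogate coefficients; for linear f these equal w
```

Since `f` is exactly linear, `beta` matches the gradient `w` up to numerical error; the paper's analysis shows how this proportionality degrades, and features can be missed, under poor choices of parameters such as the bandwidth.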

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-garreau20a,
  title     = {Explaining the Explainer: A First Theoretical Analysis of LIME},
  author    = {Garreau, Damien and von Luxburg, Ulrike},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1287--1296},
  year      = {2020},
  editor    = {Silvia Chiappa and Roberto Calandra},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/garreau20a/garreau20a.pdf},
  url       = {http://proceedings.mlr.press/v108/garreau20a.html},
  abstract  = {Machine learning is used more and more often for sensitive applications, sometimes replacing humans in critical decision-making processes. As such, interpretability of these algorithms is a pressing need. One popular algorithm to provide interpretability is LIME (Local Interpretable Model-Agnostic Explanation). In this paper, we provide the first theoretical analysis of LIME. We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear. The good news is that these coefficients are proportional to the gradient of the function to explain: LIME indeed discovers meaningful features. However, our analysis also reveals that poor choices of parameters can lead LIME to miss important features.}
}
Endnote
%0 Conference Paper
%T Explaining the Explainer: A First Theoretical Analysis of LIME
%A Damien Garreau
%A Ulrike von Luxburg
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-garreau20a
%I PMLR
%P 1287--1296
%U http://proceedings.mlr.press/v108/garreau20a.html
%V 108
%X Machine learning is used more and more often for sensitive applications, sometimes replacing humans in critical decision-making processes. As such, interpretability of these algorithms is a pressing need. One popular algorithm to provide interpretability is LIME (Local Interpretable Model-Agnostic Explanation). In this paper, we provide the first theoretical analysis of LIME. We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear. The good news is that these coefficients are proportional to the gradient of the function to explain: LIME indeed discovers meaningful features. However, our analysis also reveals that poor choices of parameters can lead LIME to miss important features.
APA
Garreau, D. & von Luxburg, U. (2020). Explaining the Explainer: A First Theoretical Analysis of LIME. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1287-1296. Available from http://proceedings.mlr.press/v108/garreau20a.html.