RelatIF: Identifying Explanatory Training Samples via Relative Influence

Elnaz Barshan, Marc-Etienne Brunet, Gintare Karolina Dziugaite
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1899-1909, 2020.

Abstract

In this work, we focus on the use of influence functions to identify relevant training examples that one might hope “explain” the predictions of a machine learning model. One shortcoming of influence functions is that the training examples deemed most “influential” are often outliers or mislabelled, making them poor choices for explanation. In order to address this shortcoming, we separate the role of global versus local influence. We introduce RelatIF, a new class of criteria for choosing relevant training examples by way of an optimization objective that places a constraint on global influence. RelatIF considers the local influence that an explanatory example has on a prediction relative to its global effects on the model. In empirical evaluations, we find that the examples returned by RelatIF are more intuitive when compared to those found using influence functions.
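The abstract's core idea, normalizing a training point's local influence on a test prediction by the magnitude of its global effect on the model, can be illustrated with a small sketch. The snippet below is not the paper's exact formulation; it is a minimal toy example on a regularized logistic regression, assuming the classical influence-function score and normalizing it by the norm of the parameter change each training point induces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary logistic regression data.
n, d = 40, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit theta by gradient descent; L2 regularization keeps the Hessian invertible.
lam = 1e-2
theta = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ theta)
    theta -= 0.5 * (X.T @ (p - y) / n + lam * theta)

def grad_loss(x, label, theta):
    # Gradient of the per-example log loss at theta.
    return (sigmoid(x @ theta) - label) * x

# Hessian of the regularized empirical risk at theta.
p = sigmoid(X @ theta)
H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)
H_inv = np.linalg.inv(H)

x_test, y_test = X[0], y[0]
g_test = grad_loss(x_test, y_test, theta)

# Local influence of each training point on the test loss (classical IF score),
# and a RelatIF-style score: local influence divided by the norm of the global
# parameter change the point induces, H^{-1} grad L(z).
influence = np.array(
    [-g_test @ H_inv @ grad_loss(X[i], y[i], theta) for i in range(n)]
)
global_effect = np.array(
    [np.linalg.norm(H_inv @ grad_loss(X[i], y[i], theta)) for i in range(n)]
)
relatif = influence / (global_effect + 1e-12)

print("most influential (IF):     ", int(np.argmax(np.abs(influence))))
print("most influential (RelatIF):", int(np.argmax(np.abs(relatif))))
```

Under this normalization, an outlier whose removal would move the parameters a lot everywhere (large `global_effect`) is penalized, so the top-ranked explanatory examples tend to be points whose influence is specific to the test prediction.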

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-barshan20a,
  title     = {RelatIF: Identifying Explanatory Training Samples via Relative Influence},
  author    = {Barshan, Elnaz and Brunet, Marc-Etienne and Dziugaite, Gintare Karolina},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1899--1909},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/barshan20a/barshan20a.pdf},
  url       = {https://proceedings.mlr.press/v108/barshan20a.html},
  abstract  = {In this work, we focus on the use of influence functions to identify relevant training examples that one might hope “explain” the predictions of a machine learning model. One shortcoming of influence functions is that the training examples deemed most “influential” are often outliers or mislabelled, making them poor choices for explanation. In order to address this shortcoming, we separate the role of global versus local influence. We introduce RelatIF, a new class of criteria for choosing relevant training examples by way of an optimization objective that places a constraint on global influence. RelatIF considers the local influence that an explanatory example has on a prediction relative to its global effects on the model. In empirical evaluations, we find that the examples returned by RelatIF are more intuitive when compared to those found using influence functions.}
}
Endnote
%0 Conference Paper
%T RelatIF: Identifying Explanatory Training Samples via Relative Influence
%A Elnaz Barshan
%A Marc-Etienne Brunet
%A Gintare Karolina Dziugaite
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-barshan20a
%I PMLR
%P 1899--1909
%U https://proceedings.mlr.press/v108/barshan20a.html
%V 108
%X In this work, we focus on the use of influence functions to identify relevant training examples that one might hope “explain” the predictions of a machine learning model. One shortcoming of influence functions is that the training examples deemed most “influential” are often outliers or mislabelled, making them poor choices for explanation. In order to address this shortcoming, we separate the role of global versus local influence. We introduce RelatIF, a new class of criteria for choosing relevant training examples by way of an optimization objective that places a constraint on global influence. RelatIF considers the local influence that an explanatory example has on a prediction relative to its global effects on the model. In empirical evaluations, we find that the examples returned by RelatIF are more intuitive when compared to those found using influence functions.
APA
Barshan, E., Brunet, M.-E., & Dziugaite, G. K. (2020). RelatIF: Identifying Explanatory Training Samples via Relative Influence. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research, 108:1899-1909. Available from https://proceedings.mlr.press/v108/barshan20a.html.
