Interpreting Black Box Predictions using Fisher Kernels

Rajiv Khanna, Been Kim, Joydeep Ghosh, Sanmi Koyejo
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:3382-3390, 2019.

Abstract

Research in both machine learning and psychology suggests that salient examples can help humans interpret learning models. To this end, we take a novel look at black-box interpretation of test predictions in terms of training examples. Our goal is to ask: “which training examples are most responsible for a given set of predictions?” To answer this question, we use Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method seamlessly handles a subset of test predictions of any size in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case, and this connection yields novel insights. We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples, and data summarization.
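To make the two ingredients named in the abstract concrete, below is a minimal, self-contained Python sketch, not the authors' implementation. It embeds each point via the Fisher kernel, phi(x) = F^(-1/2) grad_theta log p(x|theta) with F the Fisher information, then runs a greedy SBQ pass over the discrete training atoms, at each step adding the example that most reduces the quadrature posterior variance (equivalently, most increases z_S^T K_SS^(-1) z_S, where z_i is the mean Fisher kernel between atom i and the test points). The toy Gaussian model, the empirical Fisher estimate, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def score(x, mu, sigma):
    """Score function grad_theta log N(x | mu, sigma^2), theta = (mu, sigma)."""
    d_mu = (x - mu) / sigma**2
    d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
    return np.stack([d_mu, d_sigma], axis=-1)

def fisher_embed(x, mu, sigma, F_inv_sqrt):
    """Fisher-kernel feature map: phi(x) = F^(-1/2) grad log p(x|theta)."""
    return score(x, mu, sigma) @ F_inv_sqrt

# Toy "training set" and "test set", plus a fitted Gaussian model.
x_train = rng.normal(0.0, 1.0, size=200)
x_test = rng.normal(0.3, 1.2, size=50)
mu_hat, sigma_hat = x_train.mean(), x_train.std()

# Empirical Fisher information: covariance of the score over training data.
S = score(x_train, mu_hat, sigma_hat)
F = S.T @ S / len(x_train)
eigval, eigvec = np.linalg.eigh(F)
F_inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

phi_train = fisher_embed(x_train, mu_hat, sigma_hat, F_inv_sqrt)
phi_test = fisher_embed(x_test, mu_hat, sigma_hat, F_inv_sqrt)

K = phi_train @ phi_train.T        # Fisher kernel among training points
z = phi_train @ phi_test.mean(0)   # kernel mean against the test set

# Greedy SBQ over discrete atoms: grow the selected set by the training
# example that most increases z_S^T K_SS^{-1} z_S, i.e. most reduces the
# posterior variance of the Bayesian quadrature estimate.
selected = []
for _ in range(10):
    best, best_gain = None, -np.inf
    for c in range(len(x_train)):
        if c in selected:
            continue
        idx = selected + [c]
        K_SS = K[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx))  # ridge for stability
        gain = z[idx] @ np.linalg.solve(K_SS, z[idx])
        if gain > best_gain:
            best, best_gain = c, gain
    selected.append(best)

# SBQ weights for the selected examples: w = K_SS^{-1} z_S.
K_SS = K[np.ix_(selected, selected)] + 1e-8 * np.eye(len(selected))
w = np.linalg.solve(K_SS, z[selected])
print("selected training indices:", selected)
print("SBQ weights:", np.round(w, 3))

The selected indices are the training examples "most responsible" for the test predictions under this toy model; the weights indicate how strongly each contributes. The paper's analysis covers exactly this discrete-atom setting, and a single test point with an identity Fisher recovers the influence-function special case mentioned in the abstract.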

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-khanna19a,
  title     = {Interpreting Black Box Predictions using Fisher Kernels},
  author    = {Khanna, Rajiv and Kim, Been and Ghosh, Joydeep and Koyejo, Sanmi},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {3382--3390},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/khanna19a/khanna19a.pdf},
  url       = {https://proceedings.mlr.press/v89/khanna19a.html},
  abstract  = {Research in both machine learning and psychology suggests that salient examples can help humans to interpret learning models. To this end, we take a novel look at black box interpretation of test predictions in terms of training examples. Our goal is to ask “which training examples are most responsible for a given set of predictions”? To answer this question, we make use of Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method is able to seamlessly handle any sized subset of test predictions in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case yielding novel insights from this connection. We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples and data summarization.}
}
Endnote
%0 Conference Paper
%T Interpreting Black Box Predictions using Fisher Kernels
%A Rajiv Khanna
%A Been Kim
%A Joydeep Ghosh
%A Sanmi Koyejo
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-khanna19a
%I PMLR
%P 3382--3390
%U https://proceedings.mlr.press/v89/khanna19a.html
%V 89
%X Research in both machine learning and psychology suggests that salient examples can help humans to interpret learning models. To this end, we take a novel look at black box interpretation of test predictions in terms of training examples. Our goal is to ask “which training examples are most responsible for a given set of predictions”? To answer this question, we make use of Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method is able to seamlessly handle any sized subset of test predictions in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case yielding novel insights from this connection. We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples and data summarization.
APA
Khanna, R., Kim, B., Ghosh, J. & Koyejo, S. (2019). Interpreting Black Box Predictions using Fisher Kernels. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:3382-3390. Available from https://proceedings.mlr.press/v89/khanna19a.html.