Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge

Laura Rieger, Chandan Singh, William Murdoch, Bin Yu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8116-8126, 2020.

Abstract

For an explanation of a deep learning model to be effective, it must both provide insight into the model and suggest a corresponding action to achieve some objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods to increase the predictive accuracy of a deep learning model. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by inserting domain knowledge into the model via explanations. We demonstrate the ability of CDEP to increase performance on an array of toy and real datasets.
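The core idea of penalizing explanations can be illustrated in miniature. The sketch below is NOT the paper's method (CDEP penalizes contextual decomposition scores inside a deep network's training loss); instead it shows the same principle on a toy logistic regression, where a feature's "explanation" is its learned weight and a user-supplied mask flags features that prior knowledge says should be irrelevant. All names here (`train`, `irrelevant`, `lam`) are illustrative choices, not from the paper.

```python
import numpy as np

# Toy illustration (hedged sketch, not the paper's CD-based implementation):
# label depends only on feature 0, but feature 1 is a spurious shortcut that
# nearly copies the label. An explanation penalty encodes the prior knowledge
# "feature 1 should not matter" directly in the training loss.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)          # ground truth depends only on feature 0
X[:, 1] = y + 0.01 * rng.normal(size=n)  # feature 1 becomes a spurious shortcut

irrelevant = np.zeros(d)
irrelevant[1] = 1.0                      # prior: feature 1 should carry no importance

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train(lam, steps=2000, lr=0.5):
    """Gradient descent on logistic loss + lam * penalty on flagged importances."""
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n             # standard logistic-loss gradient
        grad += 2.0 * lam * irrelevant * w   # explanation penalty: shrink flagged weights
        w -= lr * grad
    return w

w_plain = train(lam=0.0)   # unconstrained model leans on the spurious feature
w_cdep = train(lam=1.0)    # penalized model is steered back toward feature 0
```

With the penalty active, the weight on the spurious feature shrinks sharply relative to the unpenalized model, while predictive accuracy is preserved because the true signal (feature 0) remains available — the same mechanism, at a much smaller scale, by which CDEP inserts domain knowledge via explanations.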

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-rieger20a,
  title     = {Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge},
  author    = {Rieger, Laura and Singh, Chandan and Murdoch, William and Yu, Bin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8116--8126},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/rieger20a/rieger20a.pdf},
  url       = {https://proceedings.mlr.press/v119/rieger20a.html},
  abstract  = {For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods to increase the predictive accuracy of a deep learning model. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by inserting domain knowledge into the model via explanations. We demonstrate the ability of CDEP to increase performance on an array of toy and real datasets.}
}
Endnote
%0 Conference Paper
%T Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge
%A Laura Rieger
%A Chandan Singh
%A William Murdoch
%A Bin Yu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-rieger20a
%I PMLR
%P 8116--8126
%U https://proceedings.mlr.press/v119/rieger20a.html
%V 119
%X For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods to increase the predictive accuracy of a deep learning model. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by inserting domain knowledge into the model via explanations. We demonstrate the ability of CDEP to increase performance on an array of toy and real datasets.
APA
Rieger, L., Singh, C., Murdoch, W. & Yu, B. (2020). Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8116-8126. Available from https://proceedings.mlr.press/v119/rieger20a.html.
