Two-stage holistic and contrastive explanation of image classification

Weiyan Xie, Xiao-Hui Li, Zhi Lin, Leonard K. M. Poon, Caleb Chen Cao, Nevin L. Zhang
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:2335-2345, 2023.

Abstract

The need to explain the output of a deep neural network classifier is now widely recognized. While previous methods typically explain a single class in the output, we advocate explaining the whole output, which is a probability distribution over multiple classes. A whole-output explanation can help a human user gain an overall understanding of model behaviour instead of only one aspect of it. It can also provide a natural framework where one can examine the evidence used to discriminate between competing classes, and thereby obtain contrastive explanations. In this paper, we propose a contrastive whole-output explanation (CWOX) method for image classification, and evaluate it using quantitative metrics and through human subject studies. The source code of CWOX is available at https://github.com/vaynexie/CWOX.
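
To make the two ideas in the abstract concrete, the sketch below (Python, with a standard torchvision classifier) illustrates them generically: first inspect the whole output distribution rather than a single class, then attribute the evidence that discriminates between two competing classes. This is a hypothetical illustration, not the authors' CWOX implementation; the model choice, the image file name, and the plain log-odds gradient saliency are stand-ins, and the actual method lives in the repository linked above.

# Hypothetical sketch of whole-output, contrastive explanation (NOT the CWOX code).
# Assumes torch/torchvision are installed; "cello_guitar.jpg" is a placeholder image.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("cello_guitar.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

# Whole output: look at the full probability distribution, not only the top-1 class.
probs = torch.softmax(model(x), dim=1)
top_p, top_c = probs.topk(5)
print(list(zip(top_c[0].tolist(), top_p[0].tolist())))

# Contrastive evidence between the two most probable (competing) classes:
# gradient of the log-odds log p(c1|x) - log p(c2|x) w.r.t. the input, used here
# as a simple stand-in for the contrastive heatmaps CWOX produces.
c1, c2 = top_c[0, 0], top_c[0, 1]
log_odds = torch.log(probs[0, c1]) - torch.log(probs[0, c2])
log_odds.backward()
saliency = x.grad[0].abs().max(dim=0).values  # 224x224 contrastive saliency map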

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-xie23a,
  title     = {Two-stage holistic and contrastive explanation of image classification},
  author    = {Xie, Weiyan and Li, Xiao-Hui and Lin, Zhi and Poon, Leonard K. M. and Cao, Caleb Chen and Zhang, Nevin L.},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2335--2345},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/xie23a/xie23a.pdf},
  url       = {https://proceedings.mlr.press/v216/xie23a.html}
}
Endnote
%0 Conference Paper
%T Two-stage holistic and contrastive explanation of image classification
%A Weiyan Xie
%A Xiao-Hui Li
%A Zhi Lin
%A Leonard K. M. Poon
%A Caleb Chen Cao
%A Nevin L. Zhang
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-xie23a
%I PMLR
%P 2335--2345
%U https://proceedings.mlr.press/v216/xie23a.html
%V 216
APA
Xie, W., Li, X., Lin, Z., Poon, L. K. M., Cao, C. C., & Zhang, N. L. (2023). Two-stage holistic and contrastive explanation of image classification. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:2335-2345. Available from https://proceedings.mlr.press/v216/xie23a.html.
