A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations

Weili Nie, Yang Zhang, Ankit Patel
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3809-3818, 2018.

Abstract

Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs); however, a theory justifying their behaviors has been missing: guided backpropagation (GBP) and the deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than the saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery, which is unrelated to the network's decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet and the local connections in CNNs are the two main causes of the compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
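The contrast among the three methods comes down to how each propagates the gradient backward through a ReLU. As a minimal NumPy sketch (ours, not the authors' code; the function name relu_backward and the toy arrays are invented for illustration): the saliency map gates the incoming gradient by the forward activation pattern, DeconvNet gates it by the sign of the backward signal alone (the "backward ReLU"), and GBP applies both gates.

import numpy as np

def relu_backward(grad_out, x, method):
    """Backward pass through one ReLU under the three visualization rules.

    grad_out: gradient arriving from the layer above
    x:        pre-activation input saved from the forward pass
    """
    if method == "saliency":    # vanilla gradient: gate by forward activation
        return grad_out * (x > 0)
    if method == "deconvnet":   # gate by the sign of the backward signal only
        return grad_out * (grad_out > 0)
    if method == "guided":      # GBP: apply both gates
        return grad_out * (x > 0) * (grad_out > 0)
    raise ValueError(method)

# Toy example: the same upstream gradient yields three different backward signals.
x = np.array([-1.0, 2.0, 3.0, -0.5])   # pre-activations from the forward pass
g = np.array([0.7, -0.3, 0.5, 0.2])    # gradient from the layer above

for m in ("saliency", "deconvnet", "guided"):
    print(m, relu_backward(g, x, m))

Because the DeconvNet and GBP rules discard negative backward signal regardless of the class-specific gradient, repeated application tends to keep only locally image-aligned structure, which is the paper's point about (partial) image recovery.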

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-nie18a,
  title = {A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations},
  author = {Nie, Weili and Zhang, Yang and Patel, Ankit},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {3809--3818},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/nie18a/nie18a.pdf},
  url = {https://proceedings.mlr.press/v80/nie18a.html},
  abstract = {Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs), however a theory is missing to justify their behaviors: Guided backpropagation (GBP) and deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery which is unrelated to the network decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, and the local connections in CNNs are the two main causes of compelling visualizations. Extensive experiments are provided that support the theoretical analysis.}
}
Endnote
%0 Conference Paper
%T A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations
%A Weili Nie
%A Yang Zhang
%A Ankit Patel
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-nie18a
%I PMLR
%P 3809--3818
%U https://proceedings.mlr.press/v80/nie18a.html
%V 80
%X Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs), however a theory is missing to justify their behaviors: Guided backpropagation (GBP) and deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery which is unrelated to the network decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, and the local connections in CNNs are the two main causes of compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
APA
Nie, W., Zhang, Y. & Patel, A. (2018). A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3809-3818. Available from https://proceedings.mlr.press/v80/nie18a.html.