Pathwise Explanation of ReLU Neural Networks

Seongwoo Lim, Won Jo, Joohyung Lee, Jaesik Choi
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4645-4653, 2024.

Abstract

Neural networks have demonstrated a wide range of successes, but their “black box” nature raises concerns about transparency and reliability. Previous research on ReLU networks has sought to unwrap these networks into linear models based on the activation states of all hidden units. In this paper, we introduce a novel approach that considers subsets of the hidden units involved in the decision-making path. This pathwise explanation provides a clearer and more consistent understanding of the relationship between the input and the decision-making process. Our method also offers flexibility in adjusting the scope of explanations within the input, i.e., from an overall attribution of the input to attributions of particular components within it. Furthermore, it allows the explanation for a given input to be decomposed into more detailed explanations. Our experiments demonstrate that the proposed method outperforms existing methods both quantitatively and qualitatively.
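For readers unfamiliar with the “unwrapping” idea the abstract refers to, below is a minimal sketch, not the authors' implementation: all names (local_linear_model, the toy weights W1, b1, w2, b2, and the unit_subset parameter) are illustrative assumptions. It shows the well-known fact that, once the on/off activation states of its hidden units are fixed, a ReLU network reduces to an exact linear model on the input's linear region, and how restricting attention to a subset of hidden units loosely mirrors the pathwise view described above.

import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network: f(x) = w2 . ReLU(W1 x + b1) + b2
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
w2, b2 = rng.standard_normal(4), rng.standard_normal()

def forward(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_linear_model(x, unit_subset=None):
    # Activation pattern at x: 1 where a hidden unit fires, 0 otherwise.
    states = (W1 @ x + b1 > 0).astype(float)
    # Optionally keep only a chosen subset of hidden units -- a crude,
    # hypothetical stand-in for the paper's decision-making path.
    if unit_subset is not None:
        mask = np.zeros_like(states)
        mask[list(unit_subset)] = 1.0
        states *= mask
    D = np.diag(states)          # gates the inactive (or excluded) units
    w = w2 @ D @ W1              # effective per-input-feature weights
    c = w2 @ D @ b1 + b2         # effective bias
    return w, c

x = rng.standard_normal(3)
w, c = local_linear_model(x)
# With all active units kept, the linear model reproduces the network
# exactly at x (and on x's entire linear region).
assert np.isclose(forward(x), w @ x + c)

The effective weights w then serve as feature attributions for x; passing a unit_subset gates out hidden units, which is (informally) the kind of subset-based decomposition the abstract describes.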

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-lim24a,
  title     = {Pathwise Explanation of {ReLU} Neural Networks},
  author    = {Lim, Seongwoo and Jo, Won and Lee, Joohyung and Choi, Jaesik},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4645--4653},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/lim24a/lim24a.pdf},
  url       = {https://proceedings.mlr.press/v238/lim24a.html},
  abstract  = {Neural networks have demonstrated a wide range of successes, but their “black box” nature raises concerns about transparency and reliability. Previous research on ReLU networks has sought to unwrap these networks into linear models based on the activation states of all hidden units. In this paper, we introduce a novel approach that considers subsets of the hidden units involved in the decision-making path. This pathwise explanation provides a clearer and more consistent understanding of the relationship between the input and the decision-making process. Our method also offers flexibility in adjusting the scope of explanations within the input, i.e., from an overall attribution of the input to attributions of particular components within it. Furthermore, it allows the explanation for a given input to be decomposed into more detailed explanations. Our experiments demonstrate that the proposed method outperforms existing methods both quantitatively and qualitatively.}
}
Endnote
%0 Conference Paper
%T Pathwise Explanation of ReLU Neural Networks
%A Seongwoo Lim
%A Won Jo
%A Joohyung Lee
%A Jaesik Choi
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-lim24a
%I PMLR
%P 4645--4653
%U https://proceedings.mlr.press/v238/lim24a.html
%V 238
%X Neural networks have demonstrated a wide range of successes, but their “black box” nature raises concerns about transparency and reliability. Previous research on ReLU networks has sought to unwrap these networks into linear models based on the activation states of all hidden units. In this paper, we introduce a novel approach that considers subsets of the hidden units involved in the decision-making path. This pathwise explanation provides a clearer and more consistent understanding of the relationship between the input and the decision-making process. Our method also offers flexibility in adjusting the scope of explanations within the input, i.e., from an overall attribution of the input to attributions of particular components within it. Furthermore, it allows the explanation for a given input to be decomposed into more detailed explanations. Our experiments demonstrate that the proposed method outperforms existing methods both quantitatively and qualitatively.
APA
Lim, S., Jo, W., Lee, J., & Choi, J. (2024). Pathwise Explanation of ReLU Neural Networks. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4645-4653. Available from https://proceedings.mlr.press/v238/lim24a.html.