Redundancy Undermines the Trustworthiness of Self-Interpretable GNNs

Wenxin Tai, Ting Zhong, Goce Trajcevski, Fan Zhou
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:58169-58188, 2025.

Abstract

This work presents a systematic investigation into the trustworthiness of explanations generated by self-interpretable graph neural networks (GNNs), revealing why models trained with different random seeds yield inconsistent explanations. We identify redundancy—resulting from weak conciseness constraints—as the root cause of both explanation inconsistency and its associated inaccuracy, ultimately hindering user trust and limiting GNN deployment in high-stakes applications. Our analysis demonstrates that redundancy is difficult to eliminate; however, a simple ensemble strategy can mitigate its detrimental effects. We validate our findings through extensive experiments across diverse datasets, model architectures, and self-interpretable GNN frameworks, providing a benchmark to guide future research on addressing redundancy and advancing GNN deployment in critical domains. Our code is available at https://github.com/ICDM-UESTC/TrustworthyExplanation.
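The "simple ensemble strategy" mentioned in the abstract is not detailed on this page; below is a minimal, hypothetical sketch of one way such an ensemble over explanations could look, assuming each self-interpretable GNN trained with a different seed outputs a per-edge importance vector for the same graph. The function name ensemble_edge_scores and the toy arrays are illustrative assumptions, not the authors' implementation.

import numpy as np

def ensemble_edge_scores(edge_scores_per_seed):
    # Stack the per-seed edge-importance vectors into shape (num_seeds, num_edges)
    # and average them, so edges selected by only one unstable run are
    # down-weighted relative to edges that all seeds agree on.
    stacked = np.stack(edge_scores_per_seed, axis=0)
    return stacked.mean(axis=0)

# Hypothetical example: explanations from three seeds over five edges.
scores_by_seed = [
    np.array([0.9, 0.1, 0.8, 0.2, 0.7]),
    np.array([0.7, 0.3, 0.9, 0.1, 0.6]),
    np.array([0.8, 0.2, 0.7, 0.3, 0.9]),
]
print(ensemble_edge_scores(scores_by_seed))  # averaged edge importances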

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-tai25a,
  title = {Redundancy Undermines the Trustworthiness of Self-Interpretable {GNN}s},
  author = {Tai, Wenxin and Zhong, Ting and Trajcevski, Goce and Zhou, Fan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {58169--58188},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/tai25a/tai25a.pdf},
  url = {https://proceedings.mlr.press/v267/tai25a.html},
  abstract = {This work presents a systematic investigation into the trustworthiness of explanations generated by self-interpretable graph neural networks (GNNs), revealing why models trained with different random seeds yield inconsistent explanations. We identify redundancy—resulting from weak conciseness constraints—as the root cause of both explanation inconsistency and its associated inaccuracy, ultimately hindering user trust and limiting GNN deployment in high-stakes applications. Our analysis demonstrates that redundancy is difficult to eliminate; however, a simple ensemble strategy can mitigate its detrimental effects. We validate our findings through extensive experiments across diverse datasets, model architectures, and self-interpretable GNN frameworks, providing a benchmark to guide future research on addressing redundancy and advancing GNN deployment in critical domains. Our code is available at https://github.com/ICDM-UESTC/TrustworthyExplanation.}
}
Endnote
%0 Conference Paper
%T Redundancy Undermines the Trustworthiness of Self-Interpretable GNNs
%A Wenxin Tai
%A Ting Zhong
%A Goce Trajcevski
%A Fan Zhou
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-tai25a
%I PMLR
%P 58169--58188
%U https://proceedings.mlr.press/v267/tai25a.html
%V 267
%X This work presents a systematic investigation into the trustworthiness of explanations generated by self-interpretable graph neural networks (GNNs), revealing why models trained with different random seeds yield inconsistent explanations. We identify redundancy—resulting from weak conciseness constraints—as the root cause of both explanation inconsistency and its associated inaccuracy, ultimately hindering user trust and limiting GNN deployment in high-stakes applications. Our analysis demonstrates that redundancy is difficult to eliminate; however, a simple ensemble strategy can mitigate its detrimental effects. We validate our findings through extensive experiments across diverse datasets, model architectures, and self-interpretable GNN frameworks, providing a benchmark to guide future research on addressing redundancy and advancing GNN deployment in critical domains. Our code is available at https://github.com/ICDM-UESTC/TrustworthyExplanation.
APA
Tai, W., Zhong, T., Trajcevski, G., & Zhou, F. (2025). Redundancy Undermines the Trustworthiness of Self-Interpretable GNNs. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:58169-58188. Available from https://proceedings.mlr.press/v267/tai25a.html.
