Explaining Graph Neural Networks Using Interpretable Local Surrogates

Farzaneh Heidari, Perouz Taslakian, Guillaume Rabusseau
Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), PMLR 221:146-155, 2023.

Abstract

We propose an interpretable local surrogate (ILS) method for understanding the predictions of black-box graph models. Explainability methods are commonly employed to gain insight into black-box models, and with the widespread adoption of GNNs in diverse applications, understanding the reasoning behind their predictions becomes crucial. Our ILS method approximates the behavior of a black-box graph model by fitting a simple surrogate model in the local neighborhood of a given input example. Leveraging the interpretability of the surrogate, ILS identifies the nodes that contribute most to a specific prediction. To identify these nodes efficiently, we use group-sparse linear models as local surrogates. In empirical evaluations on explainability benchmarks, our method consistently outperforms state-of-the-art graph explainability methods, demonstrating its effectiveness in providing enhanced interpretability for GNN predictions.
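The abstract gives enough of the recipe to sketch: perturb the neighborhood of the input, query the black-box GNN on each perturbation, fit a group-sparse linear surrogate to the queries, and rank nodes by the size of their coefficient groups. The Python sketch below is our own illustrative reconstruction of that style of explainer, not the authors' implementation: the names (predict_fn, explain_node_prediction), the uniform binary node masks, the default lam, and the ISTA-based group-lasso solver are all assumptions made for the example.

import numpy as np

def block_soft_threshold(w, thresh):
    """Proximal operator of the l2 norm: shrink a coefficient group toward zero."""
    norm = np.linalg.norm(w)
    if norm <= thresh:
        return np.zeros_like(w)
    return (1.0 - thresh / norm) * w

def fit_group_lasso(Phi, y, groups, lam=0.1, n_iter=500):
    """ISTA for (1/2n)||Phi w - y||^2 + lam * sum_g ||w[g]||_2."""
    n_samples, n_feats = Phi.shape
    # Step size from the Lipschitz constant of the smooth (least-squares) part.
    lr = n_samples / (np.linalg.norm(Phi, 2) ** 2 + 1e-12)
    w = np.zeros(n_feats)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ w - y) / n_samples
        w = w - lr * grad
        for g in groups:  # proximal step: block soft-threshold each node's group
            w[g] = block_soft_threshold(w[g], lr * lam)
    return w

def explain_node_prediction(predict_fn, X, n_samples=500, lam=0.1, seed=0):
    """Rank nodes by importance for one black-box prediction.

    X          : (n_nodes, d) node features of the local computation subgraph.
    predict_fn : maps a binary keep/drop node mask of shape (n_nodes,) to the
                 black box's score for the prediction being explained.
    Returns one importance score per node (l2 norm of its coefficient group).
    """
    rng = np.random.default_rng(seed)
    n_nodes, d = X.shape
    Z = rng.integers(0, 2, size=(n_samples, n_nodes))  # random node masks
    y = np.array([predict_fn(z) for z in Z])
    # The linear surrogate acts on the masked node features, one group per node.
    Phi = (Z[:, :, None] * X[None, :, :]).reshape(n_samples, n_nodes * d)
    groups = [np.arange(i * d, (i + 1) * d) for i in range(n_nodes)]
    w = fit_group_lasso(Phi, y, groups, lam=lam)
    return np.array([np.linalg.norm(w[g]) for g in groups])

# Toy check: a "GNN" whose score depends only on nodes 0-2 should make
# those three groups dominate the returned importances.
if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10, 4))
    print(explain_node_prediction(lambda z: float(z[0] + z[1] + z[2]), X))

Because the penalty is a sum of per-node l2 norms, whole coefficient groups are zeroed at once, so the surrogate selects entire nodes rather than individual feature dimensions; this is what makes group sparsity a natural fit for node-level explanations.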

Cite this Paper

BibTeX
@InProceedings{pmlr-v221-heidari23a,
  title     = {Explaining Graph Neural Networks Using Interpretable Local Surrogates},
  author    = {Heidari, Farzaneh and Taslakian, Perouz and Rabusseau, Guillaume},
  booktitle = {Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)},
  pages     = {146--155},
  year      = {2023},
  editor    = {Doster, Timothy and Emerson, Tegan and Kvinge, Henry and Miolane, Nina and Papillon, Mathilde and Rieck, Bastian and Sanborn, Sophia},
  volume    = {221},
  series    = {Proceedings of Machine Learning Research},
  month     = {28 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v221/heidari23a/heidari23a.pdf},
  url       = {https://proceedings.mlr.press/v221/heidari23a.html}
}
APA
Heidari, F., Taslakian, P. & Rabusseau, G. (2023). Explaining Graph Neural Networks Using Interpretable Local Surrogates. Proceedings of 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), in Proceedings of Machine Learning Research 221:146-155. Available from https://proceedings.mlr.press/v221/heidari23a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v221/heidari23a/heidari23a.pdf