GALE: Globally Assessing Local Explanations

Peter Xenopoulos, Gromit Chan, Harish Doraiswamy, Luis Gustavo Nonato, Brian Barr, Claudio Silva
Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022, PMLR 196:322-331, 2022.

Abstract

Local explainability methods – those which seek to generate an explanation for each prediction – are increasingly prevalent. However, results from different local explainability methods are difficult to compare since they may be parameter-dependent, unstable due to sampling variability, or expressed in different scales and dimensions. We propose GALE, a topology-based framework to extract a simplified representation from a set of local explanations. GALE models the relationship between the explanation space and model predictions to generate a topological skeleton, which we use to compare local explanation outputs. We demonstrate that GALE can not only reliably identify differences between explainability techniques but also provides stable representations. Then, we show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimizations, and can be broadly applied to most local explanation methods.

Cite this Paper


BibTeX
@InProceedings{pmlr-v196-xenopoulos22a,
  title     = {GALE: Globally Assessing Local Explanations},
  author    = {Xenopoulos, Peter and Chan, Gromit and Doraiswamy, Harish and Nonato, Luis Gustavo and Barr, Brian and Silva, Claudio},
  booktitle = {Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022},
  pages     = {322--331},
  year      = {2022},
  editor    = {Cloninger, Alexander and Doster, Timothy and Emerson, Tegan and Kaul, Manohar and Ktena, Ira and Kvinge, Henry and Miolane, Nina and Rieck, Bastian and Tymochko, Sarah and Wolf, Guy},
  volume    = {196},
  series    = {Proceedings of Machine Learning Research},
  month     = {25 Feb--22 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v196/xenopoulos22a/xenopoulos22a.pdf},
  url       = {https://proceedings.mlr.press/v196/xenopoulos22a.html},
  abstract  = {Local explainability methods – those which seek to generate an explanation for each prediction – are increasingly prevalent. However, results from different local explainability methods are difficult to compare since they may be parameter-dependent, unstable due to sampling variability, or expressed in different scales and dimensions. We propose GALE, a topology-based framework to extract a simplified representation from a set of local explanations. GALE models the relationship between the explanation space and model predictions to generate a topological skeleton, which we use to compare local explanation outputs. We demonstrate that GALE can not only reliably identify differences between explainability techniques but also provides stable representations. Then, we show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimizations, and can be broadly applied to most local explanation methods.}
}
Endnote
%0 Conference Paper
%T GALE: Globally Assessing Local Explanations
%A Peter Xenopoulos
%A Gromit Chan
%A Harish Doraiswamy
%A Luis Gustavo Nonato
%A Brian Barr
%A Claudio Silva
%B Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022
%C Proceedings of Machine Learning Research
%D 2022
%E Alexander Cloninger
%E Timothy Doster
%E Tegan Emerson
%E Manohar Kaul
%E Ira Ktena
%E Henry Kvinge
%E Nina Miolane
%E Bastian Rieck
%E Sarah Tymochko
%E Guy Wolf
%F pmlr-v196-xenopoulos22a
%I PMLR
%P 322--331
%U https://proceedings.mlr.press/v196/xenopoulos22a.html
%V 196
%X Local explainability methods – those which seek to generate an explanation for each prediction – are increasingly prevalent. However, results from different local explainability methods are difficult to compare since they may be parameter-dependent, unstable due to sampling variability, or expressed in different scales and dimensions. We propose GALE, a topology-based framework to extract a simplified representation from a set of local explanations. GALE models the relationship between the explanation space and model predictions to generate a topological skeleton, which we use to compare local explanation outputs. We demonstrate that GALE can not only reliably identify differences between explainability techniques but also provides stable representations. Then, we show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimizations, and can be broadly applied to most local explanation methods.
APA
Xenopoulos, P., Chan, G., Doraiswamy, H., Nonato, L.G., Barr, B. & Silva, C. (2022). GALE: Globally Assessing Local Explanations. Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022, in Proceedings of Machine Learning Research 196:322-331. Available from https://proceedings.mlr.press/v196/xenopoulos22a.html.