Axiomatic Explainer Globalness via Optimal Transport

Davin Hill, Joshua Bone, Aria Masoomi, Max Torop, Jennifer Dy
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1351-1359, 2025.

Abstract

Explainability methods are often challenging to evaluate and compare. With a multitude of explainers available, practitioners must often compare and select explainers based on quantitative evaluation metrics. One particular differentiator between explainers is the diversity of explanations for a given dataset; i.e. whether all explanations are identical, unique and uniformly distributed, or somewhere between these two extremes. In this work, we define a complexity measure for explainers, globalness, which enables deeper understanding of the distribution of explanations produced by feature attribution and feature selection methods for a given dataset. We establish the axiomatic properties that any such measure should possess and prove that our proposed measure, Wasserstein Globalness, meets these criteria. We validate the utility of Wasserstein Globalness using image, tabular, and synthetic datasets, empirically showing that it both facilitates meaningful comparison between explainers and improves the selection process for explainability methods.
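The abstract's notion of explanation diversity can be illustrated with a toy 1-D sketch: treat each explanation as a scalar attribution and measure how far the empirical distribution of explanations sits from a point mass (all explanations identical) using the 1-D Wasserstein-1 distance. The function names `w1_1d` and `spread_sketch` are hypothetical, and this is only a crude stand-in for the idea of measuring explanation spread with optimal transport, not the paper's actual Wasserstein Globalness definition.

```python
import numpy as np

def w1_1d(u, v):
    """1-D Wasserstein-1 distance between two equal-size empirical
    samples: the mean absolute difference of their sorted values."""
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

def spread_sketch(explanations):
    """Toy diversity proxy: W1 distance between the explanation sample
    and a point mass at its mean. 0 means every explanation is
    identical; larger values mean more diverse explanations.
    (Illustration only -- not the paper's definition.)"""
    point_mass = np.full_like(explanations, explanations.mean())
    return w1_1d(explanations, point_mass)

rng = np.random.default_rng(0)
identical = np.zeros(100)              # fully "global" explainer: one shared explanation
diverse = rng.uniform(-1.0, 1.0, 100)  # highly diverse explainer: spread-out explanations

print(spread_sketch(identical))  # 0.0
print(spread_sketch(diverse))    # roughly 0.5 for Uniform(-1, 1) samples
```

For equal-size, equally weighted 1-D samples, sorting and averaging absolute differences computes the Wasserstein-1 distance exactly, which keeps the sketch dependency-free.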

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-hill25a,
  title     = {Axiomatic Explainer Globalness via Optimal Transport},
  author    = {Hill, Davin and Bone, Joshua and Masoomi, Aria and Torop, Max and Dy, Jennifer},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1351--1359},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/hill25a/hill25a.pdf},
  url       = {https://proceedings.mlr.press/v258/hill25a.html},
  abstract  = {Explainability methods are often challenging to evaluate and compare. With a multitude of explainers available, practitioners must often compare and select explainers based on quantitative evaluation metrics. One particular differentiator between explainers is the diversity of explanations for a given dataset; i.e. whether all explanations are identical, unique and uniformly distributed, or somewhere between these two extremes. In this work, we define a complexity measure for explainers, globalness, which enables deeper understanding of the distribution of explanations produced by feature attribution and feature selection methods for a given dataset. We establish the axiomatic properties that any such measure should possess and prove that our proposed measure, Wasserstein Globalness, meets these criteria. We validate the utility of Wasserstein Globalness using image, tabular, and synthetic datasets, empirically showing that it both facilitates meaningful comparison between explainers and improves the selection process for explainability methods.}
}
Endnote
%0 Conference Paper
%T Axiomatic Explainer Globalness via Optimal Transport
%A Davin Hill
%A Joshua Bone
%A Aria Masoomi
%A Max Torop
%A Jennifer Dy
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-hill25a
%I PMLR
%P 1351--1359
%U https://proceedings.mlr.press/v258/hill25a.html
%V 258
%X Explainability methods are often challenging to evaluate and compare. With a multitude of explainers available, practitioners must often compare and select explainers based on quantitative evaluation metrics. One particular differentiator between explainers is the diversity of explanations for a given dataset; i.e. whether all explanations are identical, unique and uniformly distributed, or somewhere between these two extremes. In this work, we define a complexity measure for explainers, globalness, which enables deeper understanding of the distribution of explanations produced by feature attribution and feature selection methods for a given dataset. We establish the axiomatic properties that any such measure should possess and prove that our proposed measure, Wasserstein Globalness, meets these criteria. We validate the utility of Wasserstein Globalness using image, tabular, and synthetic datasets, empirically showing that it both facilitates meaningful comparison between explainers and improves the selection process for explainability methods.
APA
Hill, D., Bone, J., Masoomi, A., Torop, M. & Dy, J. (2025). Axiomatic Explainer Globalness via Optimal Transport. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1351-1359. Available from https://proceedings.mlr.press/v258/hill25a.html.