One Wave To Explain Them All: A Unifying Perspective On Feature Attribution

Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:29265-29293, 2025.

Abstract

Feature attribution methods aim to improve the transparency of deep neural networks by identifying the input features that influence a model’s decision. Pixel-based heatmaps have become the standard for attributing features to high-dimensional inputs, such as images, audio representations, and volumes. While intuitive and convenient, these pixel-based attributions fail to capture the underlying structure of the data. Moreover, the choice of domain for computing attributions has often been overlooked. This work demonstrates that the wavelet domain allows for informative and meaningful attributions. It handles any input dimension and offers a unified approach to feature attribution. Our method, the Wavelet Attribution Method (WAM), leverages the spatial and scale-localized properties of wavelet coefficients to provide explanations that capture both the where and what of a model’s decision-making process. We show that WAM quantitatively matches or outperforms existing gradient-based methods across multiple modalities, including audio, images, and volumes. Additionally, we discuss how WAM bridges attribution with broader aspects of model robustness and transparency. Project page: https://gabrielkasmi.github.io/wam/
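To make the idea concrete, below is a minimal, hypothetical sketch of gradient-based attribution computed in the wavelet domain, in the spirit of what the abstract describes. It is not the authors' implementation: it uses a hand-rolled single-level 2D Haar transform so that the decomposition stays differentiable, an untrained ResNet-18 as a stand-in classifier, and |gradient| per coefficient band as a placeholder attribution score.

# Hypothetical sketch: gradient-based attribution in the wavelet domain,
# loosely following the idea described in the abstract (not the paper's code).
import torch
import torchvision.models as models

def haar_dwt2(x):
    """Single-level 2D Haar DWT on the last two dims of x (H and W even)."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2   # coarse approximation
    lh = (a - b + c - d) / 2   # detail bands
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2, differentiable w.r.t. the coefficients."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    out = torch.zeros(*ll.shape[:-2], ll.shape[-2] * 2, ll.shape[-1] * 2)
    out[..., 0::2, 0::2] = a
    out[..., 0::2, 1::2] = b
    out[..., 1::2, 0::2] = c
    out[..., 1::2, 1::2] = d
    return out

model = models.resnet18(weights=None).eval()   # any image classifier
image = torch.rand(1, 3, 224, 224)             # placeholder input

# Treat the wavelet coefficients (not the pixels) as the inputs to explain.
coeffs = [c.detach().requires_grad_(True) for c in haar_dwt2(image)]
recon = haar_idwt2(*coeffs)                    # differentiable reconstruction
score = model(recon).max()                     # top-class logit
score.backward()

# One possible attribution: |gradient| per band, separating the coarse
# approximation (where) from the scale/orientation details (what).
attributions = {name: c.grad.abs()
                for name, c in zip(["LL", "LH", "HL", "HH"], coeffs)}

Because each band is localized both in space and in scale, the resulting maps indicate not only where the evidence lies in the image but also at which scale and orientation it appears.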

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kasmi25a,
  title     = {One Wave To Explain Them All: A Unifying Perspective On Feature Attribution},
  author    = {Kasmi, Gabriel and Brunetto, Amandine and Fel, Thomas and Parekh, Jayneel},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {29265--29293},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kasmi25a/kasmi25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kasmi25a.html}
}
Endnote
%0 Conference Paper
%T One Wave To Explain Them All: A Unifying Perspective On Feature Attribution
%A Gabriel Kasmi
%A Amandine Brunetto
%A Thomas Fel
%A Jayneel Parekh
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kasmi25a
%I PMLR
%P 29265--29293
%U https://proceedings.mlr.press/v267/kasmi25a.html
%V 267
APA
Kasmi, G., Brunetto, A., Fel, T. & Parekh, J. (2025). One Wave To Explain Them All: A Unifying Perspective On Feature Attribution. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:29265-29293. Available from https://proceedings.mlr.press/v267/kasmi25a.html.