FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks

Laines Schmalwasser, Niklas Penzel, Joachim Denzler, Julia Niebling
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:53316-53342, 2025.

Abstract

Concepts such as objects, patterns, and shapes are how humans understand the world. Building on this intuition, concept-based explainability methods aim to study representations learned by deep neural networks in relation to human-understandable concepts. Here, Concept Activation Vectors (CAVs) are an important tool and can identify whether a model learned a concept or not. However, the computational cost and time requirements of existing CAV computation pose a significant challenge, particularly in large-scale, high-dimensional architectures. To address this limitation, we introduce FastCAV, a novel approach that accelerates the extraction of CAVs by up to 63.6$\times$ (on average 46.4$\times$). We provide a theoretical foundation for our approach and give concrete assumptions under which it is equivalent to established SVM-based methods. Our empirical results demonstrate that CAVs calculated with FastCAV maintain similar performance while being more efficient and stable. In downstream applications, i.e., concept-based explanation methods, we show that FastCAV can act as a replacement leading to equivalent insights. Hence, our approach enables previously infeasible investigations of deep models, which we demonstrate by tracking the evolution of concepts during model training.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-schmalwasser25a,
  title     = {{F}ast{CAV}: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks},
  author    = {Schmalwasser, Laines and Penzel, Niklas and Denzler, Joachim and Niebling, Julia},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {53316--53342},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/schmalwasser25a/schmalwasser25a.pdf},
  url       = {https://proceedings.mlr.press/v267/schmalwasser25a.html},
  abstract  = {Concepts such as objects, patterns, and shapes are how humans understand the world. Building on this intuition, concept-based explainability methods aim to study representations learned by deep neural networks in relation to human-understandable concepts. Here, Concept Activation Vectors (CAVs) are an important tool and can identify whether a model learned a concept or not. However, the computational cost and time requirements of existing CAV computation pose a significant challenge, particularly in large-scale, high-dimensional architectures. To address this limitation, we introduce FastCAV, a novel approach that accelerates the extraction of CAVs by up to 63.6$\times$ (on average 46.4$\times$). We provide a theoretical foundation for our approach and give concrete assumptions under which it is equivalent to established SVM-based methods. Our empirical results demonstrate that CAVs calculated with FastCAV maintain similar performance while being more efficient and stable. In downstream applications, i.e., concept-based explanation methods, we show that FastCAV can act as a replacement leading to equivalent insights. Hence, our approach enables previously infeasible investigations of deep models, which we demonstrate by tracking the evolution of concepts during model training.}
}
Endnote
%0 Conference Paper
%T FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks
%A Laines Schmalwasser
%A Niklas Penzel
%A Joachim Denzler
%A Julia Niebling
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-schmalwasser25a
%I PMLR
%P 53316--53342
%U https://proceedings.mlr.press/v267/schmalwasser25a.html
%V 267
%X Concepts such as objects, patterns, and shapes are how humans understand the world. Building on this intuition, concept-based explainability methods aim to study representations learned by deep neural networks in relation to human-understandable concepts. Here, Concept Activation Vectors (CAVs) are an important tool and can identify whether a model learned a concept or not. However, the computational cost and time requirements of existing CAV computation pose a significant challenge, particularly in large-scale, high-dimensional architectures. To address this limitation, we introduce FastCAV, a novel approach that accelerates the extraction of CAVs by up to 63.6$\times$ (on average 46.4$\times$). We provide a theoretical foundation for our approach and give concrete assumptions under which it is equivalent to established SVM-based methods. Our empirical results demonstrate that CAVs calculated with FastCAV maintain similar performance while being more efficient and stable. In downstream applications, i.e., concept-based explanation methods, we show that FastCAV can act as a replacement leading to equivalent insights. Hence, our approach enables previously infeasible investigations of deep models, which we demonstrate by tracking the evolution of concepts during model training.
APA
Schmalwasser, L., Penzel, N., Denzler, J. & Niebling, J. (2025). FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:53316-53342. Available from https://proceedings.mlr.press/v267/schmalwasser25a.html.