AND: Audio Network Dissection for Interpreting Deep Acoustic Models

Tung-Yu Wu, Yu-Xiang Lin, Tsui-Wei Weng
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:53656-53680, 2024.

Abstract

Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons responsive to specific perceptual or structural input patterns. Although there is emerging work in the vision and language domains, none has explored acoustic models. To bridge the gap, we introduce AND, the first Audio Network Dissection framework that automatically establishes natural language explanations of acoustic neurons based on highly responsive audio. AND features the use of LLMs to summarize mutual acoustic features and identities among audio. Extensive experiments are conducted to verify AND's precise and informative descriptions. In addition, we highlight two acoustic model behaviors with analysis by AND. First, models discriminate audio with a combination of basic acoustic features rather than high-level abstract concepts. Second, training strategies affect neuron behaviors. Supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic for exploring high-level features. Finally, we demonstrate a potential use of AND in audio model unlearning by conducting concept-specific pruning based on the descriptions.
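The concept-specific pruning idea mentioned at the end of the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the data structures, matching rule, and function names below are assumptions for exposition, not AND's actual procedure (which is defined in the paper).

```python
# Hypothetical sketch: prune neurons whose natural-language description
# mentions a target concept, by zeroing their outgoing weights.
# Not AND's actual implementation.

def prune_neurons_by_concept(weights, descriptions, concept):
    """Zero out rows of `weights` for neurons whose description
    mentions `concept` (case-insensitive substring match)."""
    pruned = []
    for i, row in enumerate(weights):
        if concept.lower() in descriptions.get(i, "").lower():
            weights[i] = [0.0] * len(row)  # remove this neuron's contribution
            pruned.append(i)
    return pruned

# Toy layer with 3 neurons and (hypothetical) LLM-generated descriptions.
layer = [[0.5, -0.2], [1.0, 0.3], [-0.7, 0.9]]
descs = {0: "responds to dog barking",
         1: "high-pitched sirens",
         2: "dog growls and barks"}

removed = prune_neurons_by_concept(layer, descs, "dog")
print(removed)   # [0, 2]
print(layer[1])  # [1.0, 0.3]  (unrelated neuron untouched)
```

In this toy setup, pruning the "dog"-related neurons would make the model "unlearn" the concept while leaving unrelated neurons intact.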

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-wu24q,
  title     = {{AND}: Audio Network Dissection for Interpreting Deep Acoustic Models},
  author    = {Wu, Tung-Yu and Lin, Yu-Xiang and Weng, Tsui-Wei},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {53656--53680},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wu24q/wu24q.pdf},
  url       = {https://proceedings.mlr.press/v235/wu24q.html},
  abstract  = {Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons responsive to specific perceptual or structural input patterns. Although there is emerging work in the vision and language domains, none is explored for acoustic models. To bridge the gap, we introduce AND, the first Audio Network Dissection framework that automatically establishes natural language explanations of acoustic neurons based on highly responsive audio. AND features the use of LLMs to summarize mutual acoustic features and identities among audio. Extensive experiments are conducted to verify AND's precise and informative descriptions. In addition, we highlight two acoustic model behaviors with analysis by AND. First, models discriminate audio with a combination of basic acoustic features rather than high-level abstract concepts. Second, training strategies affect neuron behaviors. Supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic for exploring high-level features. Finally, we demonstrate a potential use of AND in audio model unlearning by conducting concept-specific pruning based on the descriptions.}
}
Endnote
%0 Conference Paper
%T AND: Audio Network Dissection for Interpreting Deep Acoustic Models
%A Tung-Yu Wu
%A Yu-Xiang Lin
%A Tsui-Wei Weng
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wu24q
%I PMLR
%P 53656--53680
%U https://proceedings.mlr.press/v235/wu24q.html
%V 235
%X Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons responsive to specific perceptual or structural input patterns. Although there is emerging work in the vision and language domains, none is explored for acoustic models. To bridge the gap, we introduce AND, the first Audio Network Dissection framework that automatically establishes natural language explanations of acoustic neurons based on highly responsive audio. AND features the use of LLMs to summarize mutual acoustic features and identities among audio. Extensive experiments are conducted to verify AND's precise and informative descriptions. In addition, we highlight two acoustic model behaviors with analysis by AND. First, models discriminate audio with a combination of basic acoustic features rather than high-level abstract concepts. Second, training strategies affect neuron behaviors. Supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic for exploring high-level features. Finally, we demonstrate a potential use of AND in audio model unlearning by conducting concept-specific pruning based on the descriptions.
APA
Wu, T., Lin, Y., & Weng, T. (2024). AND: Audio Network Dissection for Interpreting Deep Acoustic Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:53656-53680. Available from https://proceedings.mlr.press/v235/wu24q.html.
