AND: Audio Network Dissection for Interpreting Deep Acoustic Models
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:53656-53680, 2024.
Abstract
Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons responsive to specific perceptual or structural input patterns. Although such work is emerging in the vision and language domains, none has been explored for acoustic models. To bridge the gap, we introduce AND, the first Audio Network Dissection framework, which automatically establishes natural language explanations of acoustic neurons based on highly responsive audio. AND features the use of LLMs to summarize the mutual acoustic features and identities shared among a neuron's highly responsive audio. Extensive experiments verify that AND's descriptions are precise and informative. In addition, we highlight two acoustic model behaviors revealed by AND's analysis. First, models discriminate audio using combinations of basic acoustic features rather than high-level abstract concepts. Second, training strategies affect neuron behaviors: supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic in order to explore high-level features. Finally, we demonstrate a potential use of AND in audio model unlearning by conducting concept-specific pruning based on the generated descriptions.
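
To make the dissect-then-prune idea concrete, the following is a minimal sketch of what such a loop could look like. It is an illustration under assumptions, not the authors' released code: the model interface (a forward pass returning per-neuron activations over a probing set), the caption source, and the summarize_with_llm stub are hypothetical placeholders.

    # Hedged sketch of an AND-style pipeline: rank probing audio by a
    # neuron's activation, summarize shared features of the top clips
    # with an LLM, then prune neurons whose description matches a
    # target concept. Interfaces below are illustrative assumptions.
    import torch

    def top_activating_audio(model, probe_clips, neuron_idx, k=5):
        """Return indices of the k clips that most activate one neuron."""
        scores = []
        with torch.no_grad():
            for i, clip in enumerate(probe_clips):   # clip: (1, n_samples)
                acts = model(clip)                   # assumed: (1, n_neurons)
                scores.append((acts[0, neuron_idx].item(), i))
        scores.sort(reverse=True)
        return [idx for _, idx in scores[:k]]

    def summarize_with_llm(captions):
        """Placeholder LLM call: summarize the mutual acoustic features
        shared by the captions of the top-activating clips."""
        prompt = ("These audio clips all strongly activate one neuron:\n"
                  + "\n".join(f"- {c}" for c in captions)
                  + "\nDescribe the acoustic features they share.")
        return prompt  # swap in a real LLM query here

    def prune_concept(weight, neuron_descriptions, concept):
        """Concept-specific pruning: zero the outgoing weights of neurons
        whose description mentions the target concept."""
        for n, desc in enumerate(neuron_descriptions):
            if concept.lower() in desc.lower():
                weight[:, n] = 0.0   # weight: (out_dim, n_neurons)
        return weight

The pruning step reflects the unlearning use mentioned above: once each neuron carries a natural language description, removing a concept reduces to a string match over those descriptions followed by masking the matched neurons' weights.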