Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks

Bjørn Leth Møller, Christian Igel, Kristoffer Knutsen Wickstrøm, Jon Sporring, Robert Jenssen, Bulat Ibragimov
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:36048-36071, 2024.

Abstract

Unsupervised representation learning has become an important ingredient of today’s deep learning systems. However, only a few methods exist that explain a learned vector embedding in the sense of providing information about which parts of an input are the most important for its representation. These methods generate the explanation for a given input after the model has been evaluated and tend to produce either inaccurate explanations or are slow, which limits their practical use. To address these limitations, we introduce the Neural Explanation Masks (NEM) framework, which turns a fixed representation model into a self-explaining model by augmenting it with a masking network. This network provides occlusion-based explanations in parallel to computing the representations during inference. We present an instance of this framework, the NEM-U (NEM using U-net structure) architecture, which leverages similarities between segmentation and occlusion-based masks. Our experiments show that NEM-U generates explanations faster and with lower complexity compared to the current state-of-the-art while maintaining high accuracy as measured by locality.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-moller24a,
  title = {Finding {NEM}-U: Explaining unsupervised representation learning through neural network generated explanation masks},
  author = {M{\o}ller, Bj{\o}rn Leth and Igel, Christian and Wickstr{\o}m, Kristoffer Knutsen and Sporring, Jon and Jenssen, Robert and Ibragimov, Bulat},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {36048--36071},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/moller24a/moller24a.pdf},
  url = {https://proceedings.mlr.press/v235/moller24a.html},
  abstract = {Unsupervised representation learning has become an important ingredient of today’s deep learning systems. However, only a few methods exist that explain a learned vector embedding in the sense of providing information about which parts of an input are the most important for its representation. These methods generate the explanation for a given input after the model has been evaluated and tend to produce either inaccurate explanations or are slow, which limits their practical use. To address these limitations, we introduce the Neural Explanation Masks (NEM) framework, which turns a fixed representation model into a self-explaining model by augmenting it with a masking network. This network provides occlusion-based explanations in parallel to computing the representations during inference. We present an instance of this framework, the NEM-U (NEM using U-net structure) architecture, which leverages similarities between segmentation and occlusion-based masks. Our experiments show that NEM-U generates explanations faster and with lower complexity compared to the current state-of-the-art while maintaining high accuracy as measured by locality.}
}
Endnote
%0 Conference Paper
%T Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks
%A Bjørn Leth Møller
%A Christian Igel
%A Kristoffer Knutsen Wickstrøm
%A Jon Sporring
%A Robert Jenssen
%A Bulat Ibragimov
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-moller24a
%I PMLR
%P 36048--36071
%U https://proceedings.mlr.press/v235/moller24a.html
%V 235
%X Unsupervised representation learning has become an important ingredient of today’s deep learning systems. However, only a few methods exist that explain a learned vector embedding in the sense of providing information about which parts of an input are the most important for its representation. These methods generate the explanation for a given input after the model has been evaluated and tend to produce either inaccurate explanations or are slow, which limits their practical use. To address these limitations, we introduce the Neural Explanation Masks (NEM) framework, which turns a fixed representation model into a self-explaining model by augmenting it with a masking network. This network provides occlusion-based explanations in parallel to computing the representations during inference. We present an instance of this framework, the NEM-U (NEM using U-net structure) architecture, which leverages similarities between segmentation and occlusion-based masks. Our experiments show that NEM-U generates explanations faster and with lower complexity compared to the current state-of-the-art while maintaining high accuracy as measured by locality.
APA
Møller, B.L., Igel, C., Wickstrøm, K.K., Sporring, J., Jenssen, R. & Ibragimov, B. (2024). Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:36048-36071. Available from https://proceedings.mlr.press/v235/moller24a.html.