TVE: Learning Meta-attribution for Transferable Vision Explainer

Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, Shaochen Zhong, Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Xia Hu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:50248-50267, 2024.

Abstract

Explainable machine learning significantly improves the transparency of deep neural networks. However, existing work is constrained to explaining the behavior of individual model predictions, and lacks the ability to transfer explanations across various models and tasks. This limitation makes explaining various tasks time- and resource-consuming. To address this problem, we introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks. Specifically, the transferability of TVE is realized through a pre-training process on large-scale datasets towards learning the meta-attribution. This meta-attribution leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge for the input instance, which enables TVE to seamlessly transfer to explaining various downstream tasks, without the need for training on task-specific data. Empirical studies involve explaining three different architectures of vision models across three diverse downstream datasets. The experimental results indicate that TVE is effective in explaining these tasks without the need for additional training on downstream data.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-wang24j,
  title     = {{TVE}: Learning Meta-attribution for Transferable Vision Explainer},
  author    = {Wang, Guanchu and Chuang, Yu-Neng and Yang, Fan and Du, Mengnan and Chang, Chia-Yuan and Zhong, Shaochen and Liu, Zirui and Xu, Zhaozhuo and Zhou, Kaixiong and Cai, Xuanting and Hu, Xia},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {50248--50267},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wang24j/wang24j.pdf},
  url       = {https://proceedings.mlr.press/v235/wang24j.html},
  abstract  = {Explainable machine learning significantly improves the transparency of deep neural networks. However, existing work is constrained to explaining the behavior of individual model predictions, and lacks the ability to transfer the explanation across various models and tasks. This limitation results in explaining various tasks being time- and resource-consuming. To address this problem, we introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks. Specifically, the transferability of TVE is realized through a pre-training process on large-scale datasets towards learning the meta-attribution. This meta-attribution leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge for the input instance, which enables TVE to seamlessly transfer to explaining various downstream tasks, without the need for training on task-specific data. Empirical studies involve explaining three different architectures of vision models across three diverse downstream datasets. The experiment results indicate TVE is effective in explaining these tasks without the need for additional training on downstream data.}
}
Endnote
%0 Conference Paper
%T TVE: Learning Meta-attribution for Transferable Vision Explainer
%A Guanchu Wang
%A Yu-Neng Chuang
%A Fan Yang
%A Mengnan Du
%A Chia-Yuan Chang
%A Shaochen Zhong
%A Zirui Liu
%A Zhaozhuo Xu
%A Kaixiong Zhou
%A Xuanting Cai
%A Xia Hu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wang24j
%I PMLR
%P 50248--50267
%U https://proceedings.mlr.press/v235/wang24j.html
%V 235
%X Explainable machine learning significantly improves the transparency of deep neural networks. However, existing work is constrained to explaining the behavior of individual model predictions, and lacks the ability to transfer the explanation across various models and tasks. This limitation results in explaining various tasks being time- and resource-consuming. To address this problem, we introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks. Specifically, the transferability of TVE is realized through a pre-training process on large-scale datasets towards learning the meta-attribution. This meta-attribution leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge for the input instance, which enables TVE to seamlessly transfer to explaining various downstream tasks, without the need for training on task-specific data. Empirical studies involve explaining three different architectures of vision models across three diverse downstream datasets. The experiment results indicate TVE is effective in explaining these tasks without the need for additional training on downstream data.
APA
Wang, G., Chuang, Y., Yang, F., Du, M., Chang, C., Zhong, S., Liu, Z., Xu, Z., Zhou, K., Cai, X., & Hu, X. (2024). TVE: Learning Meta-attribution for Transferable Vision Explainer. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:50248-50267. Available from https://proceedings.mlr.press/v235/wang24j.html.