Example or Prototype? Learning Concept-Based Explanations in Time-Series

Christoph Obermair, Alexander Fuchs, Franz Pernkopf, Lukas Felsberger, Andrea Apollonio, Daniel Wollmann
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:816-831, 2023.

Abstract

With the continuous increase of deep learning applications in safety critical systems, the need for an interpretable decision-making process has become a priority within the research community. While there are many existing explainable artificial intelligence algorithms, a systematic assessment of the suitability of global explanation methods for different applications is not available. In this paper, we respond to this demand by systematically comparing two existing global concept-based explanation methods with our proposed global, model-agnostic concept-based explanation method for time-series data. This method is based on an autoencoder structure and derives abstract global explanations called "prototypes". The results of a human user study and a quantitative analysis show a superior performance of the proposed method, but also highlight the necessity of tailoring explanation methods to the target audience of machine learning models.
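The abstract describes the core idea only at a high level: an autoencoder whose latent space is used to derive global, abstract "prototypes" as explanations for time-series data. The following is a minimal, hypothetical sketch of that general idea, not the authors' implementation; the architecture, the learnable prototype layer, the loss weighting, and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a 1D-convolutional autoencoder for
# time-series windows with a set of learnable "prototype" vectors in latent space.
# Latent codes are pulled toward their nearest prototype; decoding the prototypes
# yields abstract, input-independent (i.e., global) explanations.
import torch
import torch.nn as nn


class PrototypeAutoencoder(nn.Module):
    def __init__(self, in_channels=1, latent_dim=16, n_prototypes=4, seq_len=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 8, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (seq_len // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * (seq_len // 4)),
            nn.ReLU(),
            nn.Unflatten(1, (16, seq_len // 4)),
            nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(8, in_channels, kernel_size=4, stride=2, padding=1),
        )
        # Learnable prototypes living in the latent space (hypothetical design choice).
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))

    def forward(self, x):
        z = self.encoder(x)                      # (B, latent_dim)
        recon = self.decoder(z)                  # (B, C, seq_len)
        dists = torch.cdist(z, self.prototypes)  # (B, n_prototypes)
        return recon, z, dists

    def decode_prototypes(self):
        # Decoding the prototypes gives the abstract global explanations.
        return self.decoder(self.prototypes)


def loss_fn(x, recon, dists, proto_weight=0.1):
    # Reconstruction term plus a term pulling each sample toward its nearest
    # prototype; the weight is an arbitrary illustrative value.
    recon_loss = nn.functional.mse_loss(recon, x)
    proto_loss = dists.min(dim=1).values.mean()
    return recon_loss + proto_weight * proto_loss


if __name__ == "__main__":
    model = PrototypeAutoencoder()
    x = torch.randn(8, 1, 64)  # batch of 8 univariate windows of length 64
    recon, z, dists = model(x)
    loss = loss_fn(x, recon, dists)
    loss.backward()
    print(loss.item(), model.decode_prototypes().shape)  # -> (4, 1, 64)
```

In such a setup, the decoded prototypes play the role of "abstract" explanations, in contrast to example-based methods that return actual training instances; the paper itself compares these two styles of global explanation.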

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-obermair23a,
  title     = {Example or Prototype? Learning Concept-Based Explanations in Time-Series},
  author    = {Obermair, Christoph and Fuchs, Alexander and Pernkopf, Franz and Felsberger, Lukas and Apollonio, Andrea and Wollmann, Daniel},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {816--831},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/obermair23a/obermair23a.pdf},
  url       = {https://proceedings.mlr.press/v189/obermair23a.html},
  abstract  = {With the continuous increase of deep learning applications in safety critical systems, the need for an interpretable decision-making process has become a priority within the research community. While there are many existing explainable artificial intelligence algorithms, a systematic assessment of the suitability of global explanation methods for different applications is not available. In this paper, we respond to this demand by systematically comparing two existing global concept-based explanation methods with our proposed global, model-agnostic concept-based explanation method for time-series data. This method is based on an autoencoder structure and derives abstract global explanations called "prototypes". The results of a human user study and a quantitative analysis show a superior performance of the proposed method, but also highlight the necessity of tailoring explanation methods to the target audience of machine learning models.}
}
Endnote
%0 Conference Paper
%T Example or Prototype? Learning Concept-Based Explanations in Time-Series
%A Christoph Obermair
%A Alexander Fuchs
%A Franz Pernkopf
%A Lukas Felsberger
%A Andrea Apollonio
%A Daniel Wollmann
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-obermair23a
%I PMLR
%P 816--831
%U https://proceedings.mlr.press/v189/obermair23a.html
%V 189
%X With the continuous increase of deep learning applications in safety critical systems, the need for an interpretable decision-making process has become a priority within the research community. While there are many existing explainable artificial intelligence algorithms, a systematic assessment of the suitability of global explanation methods for different applications is not available. In this paper, we respond to this demand by systematically comparing two existing global concept-based explanation methods with our proposed global, model-agnostic concept-based explanation method for time-series data. This method is based on an autoencoder structure and derives abstract global explanations called "prototypes". The results of a human user study and a quantitative analysis show a superior performance of the proposed method, but also highlight the necessity of tailoring explanation methods to the target audience of machine learning models.
APA
Obermair, C., Fuchs, A., Pernkopf, F., Felsberger, L., Apollonio, A. & Wollmann, D. (2023). Example or Prototype? Learning Concept-Based Explanations in Time-Series. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:816-831. Available from https://proceedings.mlr.press/v189/obermair23a.html.

Related Material