Example or Prototype? Learning Concept-Based Explanations in Time-Series
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:816-831, 2023.
Abstract
With the continuous increase of deep learning
applications in safety-critical systems, the need
for an interpretable decision-making process has
become a priority within the research
community. While many explainable artificial
intelligence algorithms exist, a systematic
assessment of the suitability of global explanation
methods for different applications is not
available. In this paper, we address this gap
by systematically comparing two existing global
concept-based explanation methods with our proposed
global, model-agnostic concept-based explanation
method for time-series data. This method is based on
an autoencoder structure and derives abstract global
explanations called "prototypes". The results of a
human user study and a quantitative analysis show
superior performance of the proposed method, but
also highlight the necessity of tailoring
explanation methods to the target audience of
machine learning models.
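To make the idea of abstract "prototypes" more concrete, the sketch below derives prototypes as centroids in a latent space. This is only an illustrative stand-in, not the paper's actual architecture: the synthetic `latents` array, the number of prototypes `k`, and the centroid-update procedure are all assumptions; in the paper, latent codes would come from the autoencoder and the prototypes would be learned jointly with it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for latent encodings of time-series windows
# (in the paper these would come from a trained autoencoder).
latents = rng.normal(size=(100, 8))

def derive_prototypes(z, k=3, iters=20):
    """Derive k prototypes as centroids in latent space
    (a simplified, hypothetical stand-in for a learned prototype layer)."""
    protos = z[rng.choice(len(z), size=k, replace=False)]  # random init
    for _ in range(iters):
        # Assign each latent code to its nearest prototype.
        dists = np.linalg.norm(z[:, None, :] - protos[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each prototype to the mean of its assigned codes.
        for j in range(k):
            if (assign == j).any():
                protos[j] = z[assign == j].mean(axis=0)
    return protos

prototypes = derive_prototypes(latents)
print(prototypes.shape)
```

Each resulting prototype is an abstract point in latent space; decoding it back through the autoencoder's decoder would yield the human-readable time-series explanation that the method presents to users.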