Self-Interpretable Time Series Prediction with Counterfactual Explanations

Jingquan Yan, Hao Wang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:39110-39125, 2023.

Abstract

Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving. Most existing methods focus on interpreting predictions by assigning importance scores to segments of the time series. In this paper, we take a different and more challenging route and aim at developing a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions. Specifically, we formalize the problem of time series counterfactual explanations, establish associated evaluation protocols, and propose a variational Bayesian deep learning model equipped with counterfactual inference capability for time series, i.e., abduction, action, and prediction. Compared with state-of-the-art baselines, our self-interpretable model can generate better counterfactual explanations while maintaining comparable prediction accuracy.
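
The abduction-action-prediction steps named in the abstract are Pearl's standard three-step recipe for counterfactual inference. The sketch below illustrates that generic recipe on a toy linear-Gaussian time series structural causal model; it is a minimal illustration only, not the authors' variational CounTS model, and all quantities (alpha, beta, sigma, the chosen intervention on x) are assumptions made for the example.

# Minimal sketch of Pearl-style counterfactual inference
# (abduction -> action -> prediction) on a toy linear-Gaussian time series SCM.
# NOT the authors' CounTS model; parameters and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM: y_t = alpha * x_t + beta * y_{t-1} + z_t,  z_t ~ N(0, sigma^2)
alpha, beta, sigma = 0.8, 0.5, 0.1
T = 20
x = rng.normal(size=T)                    # observed input series
z = rng.normal(scale=sigma, size=T)       # latent noise (unknown to the analyst)
y = np.zeros(T)
for t in range(T):
    y[t] = alpha * x[t] + beta * (y[t - 1] if t > 0 else 0.0) + z[t]

# 1) Abduction: infer the latent noise consistent with the observed (x, y).
#    In this linear-Gaussian toy case the inversion is exact.
z_hat = np.zeros(T)
for t in range(T):
    z_hat[t] = y[t] - alpha * x[t] - beta * (y[t - 1] if t > 0 else 0.0)

# 2) Action: intervene on the input, e.g. shift the second half of x upward.
x_cf = x.copy()
x_cf[10:] += 1.0                          # hypothetical intervention

# 3) Prediction: replay the SCM with the inferred noise and the new input.
y_cf = np.zeros(T)
for t in range(T):
    y_cf[t] = alpha * x_cf[t] + beta * (y_cf[t - 1] if t > 0 else 0.0) + z_hat[t]

print("factual y[-1]        :", round(y[-1], 3))
print("counterfactual y[-1] :", round(y_cf[-1], 3))

CounTS replaces the exact noise inversion above with variational inference over latent variables learned by a deep model, but the three-step structure of the explanation is the same.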

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-yan23d,
  title     = {Self-Interpretable Time Series Prediction with Counterfactual Explanations},
  author    = {Yan, Jingquan and Wang, Hao},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {39110--39125},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/yan23d/yan23d.pdf},
  url       = {https://proceedings.mlr.press/v202/yan23d.html}
}
Endnote
%0 Conference Paper
%T Self-Interpretable Time Series Prediction with Counterfactual Explanations
%A Jingquan Yan
%A Hao Wang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-yan23d
%I PMLR
%P 39110--39125
%U https://proceedings.mlr.press/v202/yan23d.html
%V 202
APA
Yan, J. & Wang, H. (2023). Self-Interpretable Time Series Prediction with Counterfactual Explanations. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:39110-39125. Available from https://proceedings.mlr.press/v202/yan23d.html.