Differentiable Divergences Between Time Series

Mathieu Blondel, Arthur Mensch, Jean-Philippe Vert
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3853-3861, 2021.

Abstract

Computing the discrepancy between time series of variable sizes is notoriously challenging. While dynamic time warping (DTW) is popularly used for this purpose, it is not differentiable everywhere and is known to lead to bad local optima when used as a “loss”. Soft-DTW addresses these issues, but it is not a positive definite divergence: due to the bias introduced by entropic regularization, it can be negative and it is not minimized when the time series are equal. We propose in this paper a new divergence, dubbed soft-DTW divergence, which aims to correct these issues. We study its properties; in particular, under conditions on the ground cost, we show that it is a valid divergence: it is non-negative and minimized if and only if the two time series are equal. We also propose a new “sharp” variant by further removing entropic bias. We showcase our divergences on time series averaging and demonstrate significant accuracy improvements compared to both DTW and soft-DTW on 84 time series classification datasets.
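For concreteness, the sketch below illustrates the two quantities the abstract refers to: the soft-DTW value, obtained by replacing the hard minimum of the DTW dynamic program with a soft-min (Cuturi & Blondel, 2017), and the debiased soft-DTW divergence SDTW(X, Y) − ½ SDTW(X, X) − ½ SDTW(Y, Y). This is a minimal NumPy illustration under stated assumptions, not the authors' reference implementation; the squared Euclidean ground cost and the function names are illustrative choices.

import numpy as np
from scipy.special import logsumexp

def softmin(values, gamma):
    # Smoothed minimum: -gamma * log(sum_i exp(-values_i / gamma)).
    return -gamma * logsumexp(-np.asarray(values) / gamma)

def soft_dtw(X, Y, gamma=1.0):
    # Soft-DTW between X (n x d) and Y (m x d); squared Euclidean ground cost
    # is an illustrative assumption, not mandated by the paper.
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    n, m = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise costs c(x_i, y_j)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Soft-min over the three predecessor cells replaces the hard min of DTW.
            R[i, j] = C[i - 1, j - 1] + softmin(
                [R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]], gamma)
    return R[n, m]

def soft_dtw_divergence(X, Y, gamma=1.0):
    # Debiased "soft-DTW divergence": subtracting the self-terms removes the
    # entropic bias, so the value vanishes when X == Y (under suitable costs).
    return (soft_dtw(X, Y, gamma)
            - 0.5 * soft_dtw(X, X, gamma)
            - 0.5 * soft_dtw(Y, Y, gamma))

# Example: identical series give (approximately) zero divergence.
# X = np.random.randn(5, 2); print(soft_dtw_divergence(X, X, gamma=0.1))  # ~0.0

As gamma tends to zero, soft_dtw recovers classical DTW; the subtraction of the two self-terms in soft_dtw_divergence is what removes the entropic bias discussed in the abstract.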

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-blondel21a,
  title     = {Differentiable Divergences Between Time Series},
  author    = {Blondel, Mathieu and Mensch, Arthur and Vert, Jean-Philippe},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3853--3861},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/blondel21a/blondel21a.pdf},
  url       = {https://proceedings.mlr.press/v130/blondel21a.html}
}
Endnote
%0 Conference Paper
%T Differentiable Divergences Between Time Series
%A Mathieu Blondel
%A Arthur Mensch
%A Jean-Philippe Vert
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-blondel21a
%I PMLR
%P 3853--3861
%U https://proceedings.mlr.press/v130/blondel21a.html
%V 130
APA
Blondel, M., Mensch, A. & Vert, J.-P. (2021). Differentiable Divergences Between Time Series. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3853-3861. Available from https://proceedings.mlr.press/v130/blondel21a.html.