Non-autoregressive Conditional Diffusion Models for Time Series Prediction

Lifeng Shen, James Kwok
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:31016-31029, 2023.

Abstract

Recently, denoising diffusion models have led to significant breakthroughs in the generation of images, audio and text. However, it remains an open question how to adapt their strong modeling ability to time series. In this paper, we propose TimeDiff, a non-autoregressive diffusion model that achieves high-quality time series prediction with the introduction of two novel conditioning mechanisms: future mixup and autoregressive initialization. Similar to teacher forcing, future mixup allows parts of the ground-truth future values to be used for conditioning during training, while autoregressive initialization helps better initialize the model with basic time series patterns such as short-term trends. Extensive experiments are performed on nine real-world datasets. Results show that TimeDiff consistently outperforms existing time series diffusion models, and also achieves the best overall performance across a variety of existing strong baselines (including transformers and FiLM).
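The future-mixup mechanism described above can be sketched as an element-wise interpolation between the model's conditioning signal and the ground-truth future during training, with a random mixing mask (a minimal illustration only; the function and variable names are assumptions, not the paper's actual code):

```python
import numpy as np

def future_mixup(cond_signal, future_gt, rng=None):
    """Blend a conditioning signal with the ground-truth future window
    using a random element-wise mask in [0, 1), analogous to teacher
    forcing. Illustrative sketch; names are not the paper's API."""
    rng = np.random.default_rng() if rng is None else rng
    m = rng.random(cond_signal.shape)            # mixing mask, one weight per element
    return m * cond_signal + (1.0 - m) * future_gt

# At inference time no ground truth is available, so only the
# conditioning signal would be used (mask fixed to all-ones).
```

Since the mask is drawn per element, each training step exposes the denoiser to a different mixture of predicted and true future values, which hedges against the train/test mismatch that pure teacher forcing can cause.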

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-shen23d,
  title     = {Non-autoregressive Conditional Diffusion Models for Time Series Prediction},
  author    = {Shen, Lifeng and Kwok, James},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31016--31029},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/shen23d/shen23d.pdf},
  url       = {https://proceedings.mlr.press/v202/shen23d.html},
  abstract  = {Recently, denoising diffusion models have led to significant breakthroughs in the generation of images, audio and text. However, it is still an open question on how to adapt their strong modeling ability to model time series. In this paper, we propose TimeDiff, a non-autoregressive diffusion model that achieves high-quality time series prediction with the introduction of two novel conditioning mechanisms: future mixup and autoregressive initialization. Similar to teacher forcing, future mixup allows parts of the ground-truth future predictions for conditioning, while autoregressive initialization helps better initialize the model with basic time series patterns such as short-term trends. Extensive experiments are performed on nine real-world datasets. Results show that TimeDiff consistently outperforms existing time series diffusion models, and also achieves the best overall performance across a variety of the existing strong baselines (including transformers and FiLM).}
}
Endnote
%0 Conference Paper
%T Non-autoregressive Conditional Diffusion Models for Time Series Prediction
%A Lifeng Shen
%A James Kwok
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-shen23d
%I PMLR
%P 31016--31029
%U https://proceedings.mlr.press/v202/shen23d.html
%V 202
%X Recently, denoising diffusion models have led to significant breakthroughs in the generation of images, audio and text. However, it is still an open question on how to adapt their strong modeling ability to model time series. In this paper, we propose TimeDiff, a non-autoregressive diffusion model that achieves high-quality time series prediction with the introduction of two novel conditioning mechanisms: future mixup and autoregressive initialization. Similar to teacher forcing, future mixup allows parts of the ground-truth future predictions for conditioning, while autoregressive initialization helps better initialize the model with basic time series patterns such as short-term trends. Extensive experiments are performed on nine real-world datasets. Results show that TimeDiff consistently outperforms existing time series diffusion models, and also achieves the best overall performance across a variety of the existing strong baselines (including transformers and FiLM).
APA
Shen, L. & Kwok, J. (2023). Non-autoregressive Conditional Diffusion Models for Time Series Prediction. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:31016-31029. Available from https://proceedings.mlr.press/v202/shen23d.html.