Interacting Diffusion Processes for Event Sequence Forecasting

Mai Zeng, Florence Regol, Mark Coates
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:58407-58430, 2024.

Abstract

Neural Temporal Point Processes (TPPs) have emerged as the primary framework for predicting sequences of events that occur at irregular time intervals, but their sequential nature can hamper performance for long-horizon forecasts. To address this, we introduce a novel approach that incorporates a diffusion generative model. The model facilitates sequence-to-sequence prediction, allowing multi-step predictions based on historical event sequences. In contrast to previous approaches, our model directly learns the joint probability distribution of types and inter-arrival times for multiple events. The model is composed of two diffusion processes, one for the time intervals and one for the event types. These processes interact through their respective denoising functions, which can take as input intermediate representations from both processes, allowing the model to learn complex interactions. We demonstrate that our proposal outperforms state-of-the-art baselines for long-horizon forecasting of TPPs.
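The coupling described above — two denoisers, one for inter-arrival times and one for event types, each conditioned on intermediate representations from both processes — can be illustrated with a minimal sketch. This is not the authors' architecture; all layer sizes, the concatenation-based interaction, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden, out_dim):
    """Tiny two-layer MLP as a (W1, W2) weight pair (biases omitted for brevity)."""
    return (rng.normal(size=(in_dim, hidden)) * 0.1,
            rng.normal(size=(hidden, out_dim)) * 0.1)

def mlp(x, params):
    W1, W2 = params
    return np.maximum(x @ W1, 0.0) @ W2  # ReLU hidden layer

num_types, hidden, horizon, batch = 4, 16, 5, 2

# Separate encoders for each diffusion process.
enc_time = make_mlp(1, hidden, hidden)
enc_type = make_mlp(num_types, hidden, hidden)
# Each denoising head reads the concatenated representations of BOTH
# processes -- this is where the two processes interact.
head_time = make_mlp(2 * hidden, hidden, 1)
head_type = make_mlp(2 * hidden, hidden, num_types)

def denoise(noisy_times, noisy_types):
    """One coupled denoising step over a multi-event forecast window.

    noisy_times: (batch, horizon, 1) noised inter-arrival times
    noisy_types: (batch, horizon, num_types) noised type representations
    """
    h_t = mlp(noisy_times, enc_time)
    h_k = mlp(noisy_types, enc_type)
    joint = np.concatenate([h_t, h_k], axis=-1)   # shared interaction features
    return mlp(joint, head_time), mlp(joint, head_type)

eps_time, type_logits = denoise(
    rng.normal(size=(batch, horizon, 1)),
    rng.normal(size=(batch, horizon, num_types)),
)
print(eps_time.shape, type_logits.shape)  # (2, 5, 1) (2, 5, 4)
```

Because both heads consume the joint representation, gradients from the time objective flow into the type encoder and vice versa, which is one simple way the joint distribution over types and inter-arrival times can be learned rather than two independent marginals.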

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zeng24f,
  title     = {Interacting Diffusion Processes for Event Sequence Forecasting},
  author    = {Zeng, Mai and Regol, Florence and Coates, Mark},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {58407--58430},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zeng24f/zeng24f.pdf},
  url       = {https://proceedings.mlr.press/v235/zeng24f.html},
  abstract  = {Neural Temporal Point Processes (TPPs) have emerged as the primary framework for predicting sequences of events that occur at irregular time intervals, but their sequential nature can hamper performance for long-horizon forecasts. To address this, we introduce a novel approach that incorporates a diffusion generative model. The model facilitates sequence-to-sequence prediction, allowing multi-step predictions based on historical event sequences. In contrast to previous approaches, our model directly learns the joint probability distribution of types and inter-arrival times for multiple events. The model is composed of two diffusion processes, one for the time intervals and one for the event types. These processes interact through their respective denoising functions, which can take as input intermediate representations from both processes, allowing the model to learn complex interactions. We demonstrate that our proposal outperforms state-of-the-art baselines for long-horizon forecasting of TPPs.}
}
Endnote
%0 Conference Paper
%T Interacting Diffusion Processes for Event Sequence Forecasting
%A Mai Zeng
%A Florence Regol
%A Mark Coates
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zeng24f
%I PMLR
%P 58407--58430
%U https://proceedings.mlr.press/v235/zeng24f.html
%V 235
%X Neural Temporal Point Processes (TPPs) have emerged as the primary framework for predicting sequences of events that occur at irregular time intervals, but their sequential nature can hamper performance for long-horizon forecasts. To address this, we introduce a novel approach that incorporates a diffusion generative model. The model facilitates sequence-to-sequence prediction, allowing multi-step predictions based on historical event sequences. In contrast to previous approaches, our model directly learns the joint probability distribution of types and inter-arrival times for multiple events. The model is composed of two diffusion processes, one for the time intervals and one for the event types. These processes interact through their respective denoising functions, which can take as input intermediate representations from both processes, allowing the model to learn complex interactions. We demonstrate that our proposal outperforms state-of-the-art baselines for long-horizon forecasting of TPPs.
APA
Zeng, M., Regol, F., & Coates, M. (2024). Interacting Diffusion Processes for Event Sequence Forecasting. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:58407-58430. Available from https://proceedings.mlr.press/v235/zeng24f.html.