Learning multivariate temporal point processes via the time-change theorem

Guilherme Augusto Zagatti, See Kiong Ng, Stéphane Bressan
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3241-3249, 2024.

Abstract

Marked temporal point processes (TPPs) are a class of stochastic processes that describe the occurrence of a countable number of marked events over continuous time. In machine learning, the most common representation of marked TPPs is the univariate TPP coupled with a conditional mark distribution. Alternatively, we can represent marked TPPs as a multivariate temporal point process in which we model each sequence of marks interdependently. We introduce a learning framework for multivariate TPPs leveraging recent progress on learning univariate TPPs via time-change theorems to propose a deep-learning, invertible model for the conditional intensity. We rely neither on Monte Carlo approximation for the compensator nor on thinning for sampling. Therefore, we have a generative model that can efficiently sample the next event given a history of past events. Our models show strong alignment between the percentiles of the distribution expected from theory and the empirical ones.
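
For readers unfamiliar with the time-change machinery the abstract refers to, the sketch below states the theorem in generic notation (standard background, not the paper's specific model or symbols): the compensator of each mark rescales that mark's arrival times into a unit-rate Poisson process, and inverting the compensator turns unit-exponential draws back into event times, which is what makes an invertible compensator model convenient for sampling without thinning.

    % Time-change theorem, generic statement (background sketch; notation assumed,
    % not taken from the paper). For a K-variate TPP with conditional intensities
    % \lambda_k^*(t), define the per-mark compensators
    \[
      \Lambda_k^*(t) \;=\; \int_0^{t} \lambda_k^*(s)\,\mathrm{d}s ,
      \qquad k = 1,\dots,K .
    \]
    % If t_1^k < t_2^k < \cdots are the arrival times of mark k, the rescaled times
    \[
      \tau_i^{k} \;=\; \Lambda_k^*\bigl(t_i^{k}\bigr)
    \]
    % form independent unit-rate Poisson processes, so the increments
    % \tau_i^{k} - \tau_{i-1}^{k} are i.i.d. Exp(1). Conversely, sampling the next
    % event of mark k amounts to drawing \epsilon \sim \mathrm{Exp}(1) and inverting
    % the compensator, t_i^{k} = (\Lambda_k^*)^{-1}(\tau_{i-1}^{k} + \epsilon).

Under this view, the percentile alignment mentioned in the abstract is the usual goodness-of-fit check: the empirical distribution of the rescaled inter-arrival times is compared against the Exp(1) distribution predicted by the theorem.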

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-augusto-zagatti24a,
  title     = {Learning multivariate temporal point processes via the time-change theorem},
  author    = {Augusto Zagatti, Guilherme and Kiong Ng, See and Bressan, St\'{e}phane},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3241--3249},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/augusto-zagatti24a/augusto-zagatti24a.pdf},
  url       = {https://proceedings.mlr.press/v238/augusto-zagatti24a.html},
  abstract  = {Marked temporal point processes (TPPs) are a class of stochastic processes that describe the occurrence of a countable number of marked events over continuous time. In machine learning, the most common representation of marked TPPs is the univariate TPP coupled with a conditional mark distribution. Alternatively, we can represent marked TPPs as a multivariate temporal point process in which we model each sequence of marks interdependently. We introduce a learning framework for multivariate TPPs leveraging recent progress on learning univariate TPPs via time-change theorems to propose a deep-learning, invertible model for the conditional intensity. We rely neither on Monte Carlo approximation for the compensator nor on thinning for sampling. Therefore, we have a generative model that can efficiently sample the next event given a history of past events. Our models show strong alignment between the percentiles of the distribution expected from theory and the empirical ones.}
}
Endnote
%0 Conference Paper
%T Learning multivariate temporal point processes via the time-change theorem
%A Guilherme Augusto Zagatti
%A See Kiong Ng
%A Stéphane Bressan
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-augusto-zagatti24a
%I PMLR
%P 3241--3249
%U https://proceedings.mlr.press/v238/augusto-zagatti24a.html
%V 238
%X Marked temporal point processes (TPPs) are a class of stochastic processes that describe the occurrence of a countable number of marked events over continuous time. In machine learning, the most common representation of marked TPPs is the univariate TPP coupled with a conditional mark distribution. Alternatively, we can represent marked TPPs as a multivariate temporal point process in which we model each sequence of marks interdependently. We introduce a learning framework for multivariate TPPs leveraging recent progress on learning univariate TPPs via time-change theorems to propose a deep-learning, invertible model for the conditional intensity. We rely neither on Monte Carlo approximation for the compensator nor on thinning for sampling. Therefore, we have a generative model that can efficiently sample the next event given a history of past events. Our models show strong alignment between the percentiles of the distribution expected from theory and the empirical ones.
APA
Augusto Zagatti, G., Kiong Ng, S. & Bressan, S. (2024). Learning multivariate temporal point processes via the time-change theorem. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3241-3249. Available from https://proceedings.mlr.press/v238/augusto-zagatti24a.html.