LAST SToP for Modeling Asynchronous Time Series

Shubham Gupta, Thibaut Durand, Graham W. Taylor, Lilian Bialokozowicz
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:21297-21321, 2025.

Abstract

We present a novel prompt design for Large Language Models (LLMs) tailored to Asynchronous Time Series. Unlike regular time series, which assume values at evenly spaced time points, asynchronous time series consist of timestamped events occurring at irregular intervals, each described in natural language. Our approach effectively utilizes the rich natural language of event descriptions, allowing LLMs to benefit from their broad world knowledge for reasoning across different domains and tasks. This allows us to extend the scope of asynchronous time series analysis beyond forecasting to include tasks like anomaly detection and data imputation. We further introduce Stochastic Soft Prompting, a novel prompt-tuning mechanism that significantly improves model performance, outperforming existing finetuning methods such as QLoRA. Through extensive experiments on real-world datasets, we demonstrate that our approach achieves state-of-the-art performance across different tasks and datasets.
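To make the prompt design concrete, below is a minimal sketch of how an asynchronous time series of natural-language events might be serialized into an LLM prompt. The event fields, header wording, and function names are illustrative assumptions for exposition, not the paper's actual template.

from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    timestamp: str      # e.g. "2024-03-01 09:12"
    description: str    # natural-language event description

def serialize_events(events: List[Event], task: str) -> str:
    """Render an asynchronous time series as a natural-language prompt."""
    header = ("The following is a sequence of timestamped events "
              "occurring at irregular intervals:\n")
    lines = [f"[{e.timestamp}] {e.description}" for e in events]
    return header + "\n".join(lines) + f"\n\nTask: {task}"

prompt = serialize_events(
    [
        Event("2024-03-01 09:12", "User logged in from a new device."),
        Event("2024-03-01 09:14", "Password change requested."),
        Event("2024-03-03 22:47", "Large file download initiated."),
    ],
    task="Predict the next likely event and its approximate time.",
)
print(prompt)

The abstract describes Stochastic Soft Prompting only as a prompt-tuning mechanism. The sketch below assumes one plausible realization, in which a random-length prefix of learnable soft-prompt embeddings is prepended to the input embeddings during training; this prefix-sampling detail is an assumption, not a verbatim reproduction of the paper's method.

import torch
import torch.nn as nn

class StochasticSoftPrompt(nn.Module):
    """Learnable soft prompt; a random prefix is used at training time."""

    def __init__(self, prompt_len: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq, d_model) token embeddings of the prompt text
        k = self.prompt.size(0)
        if self.training:
            # Sample a random prefix length in [1, prompt_len] (assumed scheme).
            k = int(torch.randint(1, self.prompt.size(0) + 1, (1,)))
        prefix = self.prompt[:k].unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)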

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-gupta25a,
  title     = {{LAST} {ST}o{P} for Modeling Asynchronous Time Series},
  author    = {Gupta, Shubham and Durand, Thibaut and Taylor, Graham W. and Bialokozowicz, Lilian},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {21297--21321},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/gupta25a/gupta25a.pdf},
  url       = {https://proceedings.mlr.press/v267/gupta25a.html},
  abstract  = {We present a novel prompt design for Large Language Models (LLMs) tailored to Asynchronous Time Series. Unlike regular time series, which assume values at evenly spaced time points, asynchronous time series consist of timestamped events occurring at irregular intervals, each described in natural language. Our approach effectively utilizes the rich natural language of event descriptions, allowing LLMs to benefit from their broad world knowledge for reasoning across different domains and tasks. This allows us to extend the scope of asynchronous time series analysis beyond forecasting to include tasks like anomaly detection and data imputation. We further introduce Stochastic Soft Prompting, a novel prompt-tuning mechanism that significantly improves model performance, outperforming existing finetuning methods such as QLoRA. Through extensive experiments on real-world datasets, we demonstrate that our approach achieves state-of-the-art performance across different tasks and datasets.}
}
Endnote
%0 Conference Paper
%T LAST SToP for Modeling Asynchronous Time Series
%A Shubham Gupta
%A Thibaut Durand
%A Graham W. Taylor
%A Lilian Bialokozowicz
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-gupta25a
%I PMLR
%P 21297--21321
%U https://proceedings.mlr.press/v267/gupta25a.html
%V 267
%X We present a novel prompt design for Large Language Models (LLMs) tailored to Asynchronous Time Series. Unlike regular time series, which assume values at evenly spaced time points, asynchronous time series consist of timestamped events occurring at irregular intervals, each described in natural language. Our approach effectively utilizes the rich natural language of event descriptions, allowing LLMs to benefit from their broad world knowledge for reasoning across different domains and tasks. This allows us to extend the scope of asynchronous time series analysis beyond forecasting to include tasks like anomaly detection and data imputation. We further introduce Stochastic Soft Prompting, a novel prompt-tuning mechanism that significantly improves model performance, outperforming existing finetuning methods such as QLoRA. Through extensive experiments on real-world datasets, we demonstrate that our approach achieves state-of-the-art performance across different tasks and datasets.
APA
Gupta, S., Durand, T., Taylor, G.W. & Bialokozowicz, L. (2025). LAST SToP for Modeling Asynchronous Time Series. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:21297-21321. Available from https://proceedings.mlr.press/v267/gupta25a.html.
