FSTLLM: Spatio-Temporal LLM for Few Shot Time Series Forecasting

Yue Jiang, Yile Chen, Xiucheng Li, Qin Chao, Shuai Liu, Gao Cong
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:27492-27509, 2025.

Abstract

Time series forecasting fundamentally relies on accurately modeling complex interdependencies and shared patterns within time series data. Recent advancements, such as Spatio-Temporal Graph Neural Networks (STGNNs) and Time Series Foundation Models (TSFMs), have demonstrated promising results by effectively capturing intricate spatial and temporal dependencies across diverse real-world datasets. However, these models typically require large volumes of training data and often struggle in data-scarce scenarios. To address this limitation, we propose a framework named Few-shot Spatio-Temporal Large Language Models (FSTLLM), aimed at enhancing model robustness and predictive performance in few-shot settings. FSTLLM leverages the contextual knowledge embedded in Large Language Models (LLMs) to provide reasonable and accurate predictions. In addition, it supports the seamless integration of existing forecasting models to further boost their predictive capabilities. Experimental results on real-world datasets demonstrate the adaptability and consistently superior performance of FSTLLM over major baseline models by a significant margin. Our code is available at: https://github.com/JIANGYUE61610306/FSTLLM.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-jiang25a,
  title     = {{FSTLLM}: Spatio-Temporal {LLM} for Few Shot Time Series Forecasting},
  author    = {Jiang, Yue and Chen, Yile and Li, Xiucheng and Chao, Qin and Liu, Shuai and Cong, Gao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {27492--27509},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/jiang25a/jiang25a.pdf},
  url       = {https://proceedings.mlr.press/v267/jiang25a.html},
  abstract  = {Time series forecasting fundamentally relies on accurately modeling complex interdependencies and shared patterns within time series data. Recent advancements, such as Spatio-Temporal Graph Neural Networks (STGNNs) and Time Series Foundation Models (TSFMs), have demonstrated promising results by effectively capturing intricate spatial and temporal dependencies across diverse real-world datasets. However, these models typically require large volumes of training data and often struggle in data-scarce scenarios. To address this limitation, we propose a framework named Few-shot Spatio-Temporal Large Language Models (FSTLLM), aimed at enhancing model robustness and predictive performance in few-shot settings. FSTLLM leverages the contextual knowledge embedded in Large Language Models (LLMs) to provide reasonable and accurate predictions. In addition, it supports the seamless integration of existing forecasting models to further boost their predictive capabilities. Experimental results on real-world datasets demonstrate the adaptability and consistently superior performance of FSTLLM over major baseline models by a significant margin. Our code is available at: https://github.com/JIANGYUE61610306/FSTLLM.}
}
Endnote
%0 Conference Paper
%T FSTLLM: Spatio-Temporal LLM for Few Shot Time Series Forecasting
%A Yue Jiang
%A Yile Chen
%A Xiucheng Li
%A Qin Chao
%A Shuai Liu
%A Gao Cong
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-jiang25a
%I PMLR
%P 27492--27509
%U https://proceedings.mlr.press/v267/jiang25a.html
%V 267
%X Time series forecasting fundamentally relies on accurately modeling complex interdependencies and shared patterns within time series data. Recent advancements, such as Spatio-Temporal Graph Neural Networks (STGNNs) and Time Series Foundation Models (TSFMs), have demonstrated promising results by effectively capturing intricate spatial and temporal dependencies across diverse real-world datasets. However, these models typically require large volumes of training data and often struggle in data-scarce scenarios. To address this limitation, we propose a framework named Few-shot Spatio-Temporal Large Language Models (FSTLLM), aimed at enhancing model robustness and predictive performance in few-shot settings. FSTLLM leverages the contextual knowledge embedded in Large Language Models (LLMs) to provide reasonable and accurate predictions. In addition, it supports the seamless integration of existing forecasting models to further boost their predictive capabilities. Experimental results on real-world datasets demonstrate the adaptability and consistently superior performance of FSTLLM over major baseline models by a significant margin. Our code is available at: https://github.com/JIANGYUE61610306/FSTLLM.
APA
Jiang, Y., Chen, Y., Li, X., Chao, Q., Liu, S. & Cong, G. (2025). FSTLLM: Spatio-Temporal LLM for Few Shot Time Series Forecasting. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:27492-27509. Available from https://proceedings.mlr.press/v267/jiang25a.html.