Multi-resolution Time-Series Transformer for Long-term Forecasting

Yitian Zhang, Liheng Ma, Soumyasundar Pal, Yingxue Zhang, Mark Coates
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4222-4230, 2024.

Abstract

The performance of transformers for time-series forecasting has improved significantly. Recent architectures learn complex temporal patterns by segmenting a time-series into patches and using the patches as tokens. The patch size controls the ability of transformers to learn the temporal patterns at different frequencies: shorter patches are effective for learning localized, high-frequency patterns, whereas mining long-term seasonalities and trends requires longer patches. Inspired by this observation, we propose a novel framework, Multi-resolution Time-Series Transformer (MTST), which consists of a multi-branch architecture for simultaneous modeling of diverse temporal patterns at different resolutions. In contrast to many existing time-series transformers, we employ relative positional encoding, which is better suited for extracting periodic components at different scales. Extensive experiments on several real-world datasets demonstrate the effectiveness of MTST in comparison to state-of-the-art forecasting techniques.
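The abstract describes tokenizing a series into patches at several resolutions, with one branch per patch size. The following is a minimal illustrative sketch of that patching idea only (not the authors' implementation); the patch sizes, embedding dimension, and class name are assumptions chosen for demonstration.

# Illustrative sketch: multi-resolution patch tokenization of a univariate series.
# Patch sizes, d_model, and names below are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class MultiResolutionPatching(nn.Module):
    def __init__(self, patch_sizes=(8, 16, 32), d_model=64):
        super().__init__()
        self.patch_sizes = patch_sizes
        # One linear patch-embedding layer per resolution (one branch per patch size).
        self.embed = nn.ModuleList(nn.Linear(p, d_model) for p in patch_sizes)

    def forward(self, x):
        # x: (batch, seq_len); here seq_len is assumed divisible by every patch size.
        tokens = []
        for p, emb in zip(self.patch_sizes, self.embed):
            patches = x.unfold(dimension=-1, size=p, step=p)  # (batch, seq_len // p, p)
            tokens.append(emb(patches))                       # (batch, num_patches, d_model)
        # Short patches yield many fine-grained tokens (high-frequency detail);
        # long patches yield few coarse tokens (trends and long seasonalities).
        return tokens

series = torch.randn(4, 96)               # batch of 4 series, lookback length 96
branch_tokens = MultiResolutionPatching()(series)
print([t.shape for t in branch_tokens])   # [(4, 12, 64), (4, 6, 64), (4, 3, 64)]

Each branch's token sequence would then be processed by its own transformer encoder; the paper additionally uses relative positional encoding, which is not shown in this sketch.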

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-zhang24l,
  title     = {Multi-resolution Time-Series Transformer for Long-term Forecasting},
  author    = {Zhang, Yitian and Ma, Liheng and Pal, Soumyasundar and Zhang, Yingxue and Coates, Mark},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4222--4230},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/zhang24l/zhang24l.pdf},
  url       = {https://proceedings.mlr.press/v238/zhang24l.html}
}
Endnote
%0 Conference Paper
%T Multi-resolution Time-Series Transformer for Long-term Forecasting
%A Yitian Zhang
%A Liheng Ma
%A Soumyasundar Pal
%A Yingxue Zhang
%A Mark Coates
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-zhang24l
%I PMLR
%P 4222--4230
%U https://proceedings.mlr.press/v238/zhang24l.html
%V 238
APA
Zhang, Y., Ma, L., Pal, S., Zhang, Y. & Coates, M. (2024). Multi-resolution Time-Series Transformer for Long-term Forecasting. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4222-4230. Available from https://proceedings.mlr.press/v238/zhang24l.html.
