Incorporating the Cycle Inductive Bias in Masked Autoencoders

Stuart Gallina Ottersen, Kerstin Bach
Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL), PMLR 307:319-327, 2026.

Abstract

Many time series exhibit cyclic structure, for example in physiological signals such as ECG or EEG, yet most representation learning methods treat them as generic sequences. We propose a masked autoencoder (MAE) framework that explicitly leverages cycles as an inductive bias for more efficient and effective time-series modelling. Our method decomposes sequences into cycles and trains the model to reconstruct masked segments at both the cycle and sequence level. This cycle-based decomposition shortens the effective sequence length processed by the encoder by up to a factor of ten in our experiments, yielding substantial computational savings without loss in reconstruction quality. At the same time, the approach exposes the encoder to a greater diversity of temporal patterns, as each cycle forms an additional training instance, which enhances the ability to capture subtle intra-cycle variations. Empirically, our framework outperforms three competitive baselines across four cyclic datasets, while also reducing training time on larger datasets.
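
The abstract describes two preprocessing steps: splitting a cyclic series into its constituent cycles and masking segments for reconstruction. The Python sketch below is a minimal illustration of that idea only, not the authors' implementation; the fixed cycle length, the zero-filled mask, and all function names are assumptions made for the example.

import numpy as np

def split_into_cycles(x, cycle_len):
    # Split a 1-D series into consecutive fixed-length cycles.
    # (Illustrative assumption: the paper may detect cycle boundaries from the data instead.)
    n_cycles = len(x) // cycle_len
    return x[: n_cycles * cycle_len].reshape(n_cycles, cycle_len)

def mask_segments(cycles, mask_ratio=0.5, seed=None):
    # Randomly hide a fraction of positions in each cycle; return the masked
    # cycles and the boolean mask (True = hidden) used as the reconstruction target.
    rng = np.random.default_rng(seed)
    mask = rng.random(cycles.shape) < mask_ratio
    masked = np.where(mask, 0.0, cycles)
    return masked, mask

# Toy example: a noisy sine with a 100-sample period stands in for an ECG-like signal.
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 100) + 0.05 * np.random.default_rng(0).standard_normal(1000)

cycles = split_into_cycles(signal, cycle_len=100)          # shape (10, 100): each encoder input is 10x shorter
masked_cycles, mask = mask_segments(cycles, mask_ratio=0.5, seed=0)
# An MAE encoder would now process each 100-sample cycle (and the sequence of cycles)
# and be trained to reconstruct the values at the masked positions.
print(cycles.shape, masked_cycles.shape, mask.mean())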

Cite this Paper


BibTeX
@InProceedings{pmlr-v307-ottersen26a,
  title     = {Incorporating the Cycle Inductive Bias in Masked Autoencoders},
  author    = {Ottersen, Stuart Gallina and Bach, Kerstin},
  booktitle = {Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL)},
  pages     = {319--327},
  year      = {2026},
  editor    = {Kim, Hyeongji and Ramírez Rivera, Adín and Ricaud, Benjamin},
  volume    = {307},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v307/main/assets/ottersen26a/ottersen26a.pdf},
  url       = {https://proceedings.mlr.press/v307/ottersen26a.html},
  abstract  = {Many time series exhibit cyclic structure, for example in physiological signals such as ECG or EEG, yet most representation learning methods treat them as generic sequences. We propose a masked autoencoder (MAE) framework that explicitly leverages cycles as an inductive bias for more efficient and effective time-series modelling. Our method decomposes sequences into cycles and trains the model to reconstruct masked segments at both the cycle and sequence level. This cycle-based decomposition shortens the effective sequence length processed by the encoder by up to a factor of ten in our experiments, yielding substantial computational savings without loss in reconstruction quality. At the same time, the approach exposes the encoder to a greater diversity of temporal patterns, as each cycle forms an additional training instance, which enhances the ability to capture subtle intra-cycle variations. Empirically, our framework outperforms three competitive baselines across four cyclic datasets, while also reducing training time on larger datasets.}
}
Endnote
%0 Conference Paper
%T Incorporating the Cycle Inductive Bias in Masked Autoencoders
%A Stuart Gallina Ottersen
%A Kerstin Bach
%B Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL)
%C Proceedings of Machine Learning Research
%D 2026
%E Hyeongji Kim
%E Adín Ramírez Rivera
%E Benjamin Ricaud
%F pmlr-v307-ottersen26a
%I PMLR
%P 319--327
%U https://proceedings.mlr.press/v307/ottersen26a.html
%V 307
%X Many time series exhibit cyclic structure, for example in physiological signals such as ECG or EEG, yet most representation learning methods treat them as generic sequences. We propose a masked autoencoder (MAE) framework that explicitly leverages cycles as an inductive bias for more efficient and effective time-series modelling. Our method decomposes sequences into cycles and trains the model to reconstruct masked segments at both the cycle and sequence level. This cycle-based decomposition shortens the effective sequence length processed by the encoder by up to a factor of ten in our experiments, yielding substantial computational savings without loss in reconstruction quality. At the same time, the approach exposes the encoder to a greater diversity of temporal patterns, as each cycle forms an additional training instance, which enhances the ability to capture subtle intra-cycle variations. Empirically, our framework outperforms three competitive baselines across four cyclic datasets, while also reducing training time on larger datasets.
APA
Ottersen, S.G. & Bach, K. (2026). Incorporating the Cycle Inductive Bias in Masked Autoencoders. Proceedings of the 7th Northern Lights Deep Learning Conference (NLDL), in Proceedings of Machine Learning Research 307:319-327. Available from https://proceedings.mlr.press/v307/ottersen26a.html.