Context Consistency Regularization for Label Sparsity in Time Series

Yooju Shin, Susik Yoon, Hwanjun Song, Dongmin Park, Byunghyun Kim, Jae-Gil Lee, Byung Suk Lee
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:31579-31595, 2023.

Abstract

Labels are typically sparse in real-world time series due to the high annotation cost. Recently, consistency regularization techniques have been used to generate artificial labels from unlabeled augmented instances. To fully exploit the sequential characteristic of time series in consistency regularization, we propose a novel method of data augmentation called context-attached augmentation, which adds preceding and succeeding instances to a target instance to form its augmented instance. Unlike the existing augmentation techniques that modify a target instance by directly perturbing its attributes, the context-attached augmentation generates instances augmented with varying contexts while maintaining the target instance. Based on our augmentation method, we propose a context consistency regularization framework, which first adds different contexts to a target instance sampled from a given time series and then shares unitary reliability-based cross-window labels across the augmented instances to maintain consistency. We demonstrate that the proposed framework outperforms the existing state-of-the-art consistency regularization frameworks through comprehensive experiments on real-world time-series datasets.
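To make the idea of context-attached augmentation concrete, here is a minimal illustrative sketch (not the authors' implementation): given a time series and a fixed target window, each augmented view keeps the target window unchanged and attaches a randomly sized stretch of preceding and succeeding context. The function name, the uniform sampling of context lengths, and all parameter names are assumptions for illustration only.

```python
import numpy as np

def context_attached_augment(series, target_start, target_len,
                             context_len, n_views, rng=None):
    """Illustrative sketch: build views of a target window by attaching
    varying preceding/succeeding context, leaving the target unmodified.
    The uniform context sampling here is a hypothetical choice, not the
    paper's exact procedure."""
    rng = np.random.default_rng(rng)
    views = []
    for _ in range(n_views):
        # Sample how much context to attach on each side.
        left = int(rng.integers(0, context_len + 1))
        right = int(rng.integers(0, context_len + 1))
        start = max(0, target_start - left)
        end = min(len(series), target_start + target_len + right)
        views.append(series[start:end])
    return views

series = np.arange(100.0)
views = context_attached_augment(series, target_start=40, target_len=10,
                                 context_len=5, n_views=3, rng=0)
# Every view contains the unmodified target window series[40:50];
# only the surrounding context differs across views.
assert all(np.isin(series[40:50], v).all() for v in views)
```

This contrasts with attribute-perturbing augmentations (jittering, scaling, masking), which modify the target values themselves; here the target is preserved exactly and only its context varies, which is what allows a single cross-window label to be shared across all views.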

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-shin23e,
  title     = {Context Consistency Regularization for Label Sparsity in Time Series},
  author    = {Shin, Yooju and Yoon, Susik and Song, Hwanjun and Park, Dongmin and Kim, Byunghyun and Lee, Jae-Gil and Lee, Byung Suk},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31579--31595},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/shin23e/shin23e.pdf},
  url       = {https://proceedings.mlr.press/v202/shin23e.html},
  abstract  = {Labels are typically sparse in real-world time series due to the high annotation cost. Recently, consistency regularization techniques have been used to generate artificial labels from unlabeled augmented instances. To fully exploit the sequential characteristic of time series in consistency regularization, we propose a novel method of data augmentation called context-attached augmentation, which adds preceding and succeeding instances to a target instance to form its augmented instance. Unlike the existing augmentation techniques that modify a target instance by directly perturbing its attributes, the context-attached augmentation generates instances augmented with varying contexts while maintaining the target instance. Based on our augmentation method, we propose a context consistency regularization framework, which first adds different contexts to a target instance sampled from a given time series and then shares unitary reliability-based cross-window labels across the augmented instances to maintain consistency. We demonstrate that the proposed framework outperforms the existing state-of-the-art consistency regularization frameworks through comprehensive experiments on real-world time-series datasets.}
}
Endnote
%0 Conference Paper
%T Context Consistency Regularization for Label Sparsity in Time Series
%A Yooju Shin
%A Susik Yoon
%A Hwanjun Song
%A Dongmin Park
%A Byunghyun Kim
%A Jae-Gil Lee
%A Byung Suk Lee
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-shin23e
%I PMLR
%P 31579--31595
%U https://proceedings.mlr.press/v202/shin23e.html
%V 202
%X Labels are typically sparse in real-world time series due to the high annotation cost. Recently, consistency regularization techniques have been used to generate artificial labels from unlabeled augmented instances. To fully exploit the sequential characteristic of time series in consistency regularization, we propose a novel method of data augmentation called context-attached augmentation, which adds preceding and succeeding instances to a target instance to form its augmented instance. Unlike the existing augmentation techniques that modify a target instance by directly perturbing its attributes, the context-attached augmentation generates instances augmented with varying contexts while maintaining the target instance. Based on our augmentation method, we propose a context consistency regularization framework, which first adds different contexts to a target instance sampled from a given time series and then shares unitary reliability-based cross-window labels across the augmented instances to maintain consistency. We demonstrate that the proposed framework outperforms the existing state-of-the-art consistency regularization frameworks through comprehensive experiments on real-world time-series datasets.
APA
Shin, Y., Yoon, S., Song, H., Park, D., Kim, B., Lee, J.-G. & Lee, B.S. (2023). Context Consistency Regularization for Label Sparsity in Time Series. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:31579-31595. Available from https://proceedings.mlr.press/v202/shin23e.html.