Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation

Yu-Yang Qian, Peng Zhao, Yu-Jie Zhang, Masashi Sugiyama, Zhi-Hua Zhou
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:41383-41415, 2024.

Abstract

Dynamic regret minimization offers a principled way for non-stationary online learning, where the algorithm’s performance is evaluated against changing comparators. Prevailing methods often employ a two-layer online ensemble, consisting of a group of base learners with different configurations and a meta learner that combines their outputs. Given the evident computational overhead associated with two-layer algorithms, this paper investigates how to attain optimal dynamic regret without deploying a model ensemble. To this end, we introduce the notion of underlying dynamic regret, a specific form of the general dynamic regret that can encompass many applications of interest. We show that almost optimal dynamic regret can be obtained using a single-layer model alone. This is achieved by an adaptive restart equipped with wavelet detection, wherein a novel streaming wavelet operator is introduced to online update the wavelet coefficients via a carefully designed binary indexed tree. We apply our method to the online label shift adaptation problem, leading to new algorithms with optimal dynamic regret and significantly improved computation/storage efficiency compared to prior arts. Extensive experiments validate our proposal.
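
The key machinery described above — updating wavelet coefficients online through a binary indexed tree so that a restart test can be run efficiently — can be pictured with a small sketch. The snippet below is an illustrative approximation only, not the paper's streaming wavelet operator: it assumes Haar wavelets, and the names `FenwickTree` and `haar_detail` are hypothetical. It shows how a Fenwick (binary indexed) tree over the observed stream lets dyadic-block Haar detail coefficients be read off from three prefix sums in O(log T) time per query.

```python
# Minimal sketch (assumptions: Haar wavelets, illustrative names), not the
# paper's exact operator. A binary indexed tree maintains streaming prefix
# sums; Haar detail coefficients on dyadic blocks follow from prefix sums.

import math


class FenwickTree:
    """Binary indexed tree over a stream of at most n observations."""

    def __init__(self, n: int):
        self.n = n
        self.tree = [0.0] * (n + 1)  # 1-indexed internal storage

    def add(self, i: int, value: float) -> None:
        """Add `value` at position i (1-indexed), e.g. the loss at round i."""
        while i <= self.n:
            self.tree[i] += value
            i += i & (-i)

    def prefix_sum(self, i: int) -> float:
        """Return the sum of positions 1..i in O(log n)."""
        s = 0.0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s


def haar_detail(bit: FenwickTree, j: int, k: int) -> float:
    """Haar detail coefficient on the dyadic block of length 2^j starting at
    position k*2^j + 1, with sign +1 on the first half and -1 on the second,
    normalized by 2^{-j/2}. Needs only three prefix-sum queries."""
    lo = k * (1 << j)            # elements before the block
    mid = lo + (1 << (j - 1))    # end of the first half
    hi = lo + (1 << j)           # end of the block
    first = bit.prefix_sum(mid) - bit.prefix_sum(lo)
    second = bit.prefix_sum(hi) - bit.prefix_sum(mid)
    return (first - second) / math.sqrt(1 << j)


# Toy usage: a stream with an abrupt mean shift after round 8.
T = 16
bit = FenwickTree(T)
for t in range(1, T + 1):
    bit.add(t, 0.0 if t <= 8 else 1.0)
print(haar_detail(bit, 4, 0))  # large magnitude hints at a change in the block
```

In this toy example, a coefficient of large magnitude on a block indicates a shift inside it, which is the kind of signal an adaptive restart rule could threshold; the paper's actual detection statistic and tree design may differ.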

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-qian24c,
  title     = {Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation},
  author    = {Qian, Yu-Yang and Zhao, Peng and Zhang, Yu-Jie and Sugiyama, Masashi and Zhou, Zhi-Hua},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {41383--41415},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/qian24c/qian24c.pdf},
  url       = {https://proceedings.mlr.press/v235/qian24c.html},
  abstract  = {Dynamic regret minimization offers a principled way for non-stationary online learning, where the algorithm’s performance is evaluated against changing comparators. Prevailing methods often employ a two-layer online ensemble, consisting of a group of base learners with different configurations and a meta learner that combines their outputs. Given the evident computational overhead associated with two-layer algorithms, this paper investigates how to attain optimal dynamic regret without deploying a model ensemble. To this end, we introduce the notion of underlying dynamic regret, a specific form of the general dynamic regret that can encompass many applications of interest. We show that almost optimal dynamic regret can be obtained using a single-layer model alone. This is achieved by an adaptive restart equipped with wavelet detection, wherein a novel streaming wavelet operator is introduced to online update the wavelet coefficients via a carefully designed binary indexed tree. We apply our method to the online label shift adaptation problem, leading to new algorithms with optimal dynamic regret and significantly improved computation/storage efficiency compared to prior arts. Extensive experiments validate our proposal.}
}
Endnote
%0 Conference Paper
%T Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation
%A Yu-Yang Qian
%A Peng Zhao
%A Yu-Jie Zhang
%A Masashi Sugiyama
%A Zhi-Hua Zhou
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-qian24c
%I PMLR
%P 41383--41415
%U https://proceedings.mlr.press/v235/qian24c.html
%V 235
%X Dynamic regret minimization offers a principled way for non-stationary online learning, where the algorithm’s performance is evaluated against changing comparators. Prevailing methods often employ a two-layer online ensemble, consisting of a group of base learners with different configurations and a meta learner that combines their outputs. Given the evident computational overhead associated with two-layer algorithms, this paper investigates how to attain optimal dynamic regret without deploying a model ensemble. To this end, we introduce the notion of underlying dynamic regret, a specific form of the general dynamic regret that can encompass many applications of interest. We show that almost optimal dynamic regret can be obtained using a single-layer model alone. This is achieved by an adaptive restart equipped with wavelet detection, wherein a novel streaming wavelet operator is introduced to online update the wavelet coefficients via a carefully designed binary indexed tree. We apply our method to the online label shift adaptation problem, leading to new algorithms with optimal dynamic regret and significantly improved computation/storage efficiency compared to prior arts. Extensive experiments validate our proposal.
APA
Qian, Y., Zhao, P., Zhang, Y., Sugiyama, M. & Zhou, Z. (2024). Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:41383-41415. Available from https://proceedings.mlr.press/v235/qian24c.html.