Small-loss Adaptive Regret for Online Convex Optimization

Wenhao Yang, Wei Jiang, Yibo Wang, Ping Yang, Yao Hu, Lijun Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:56156-56195, 2024.

Abstract

To deal with changing environments, adaptive regret has been proposed to minimize the regret over every interval. Previous studies have established a small-loss adaptive regret bound for general convex functions under the smoothness condition, offering the advantage of being much tighter than minimax rates for benign problems. However, it remains unclear whether similar bounds are attainable for other types of convex functions, such as exp-concave and strongly convex functions. In this paper, we first propose a novel algorithm that achieves a small-loss adaptive regret bound for exp-concave and smooth functions. Subsequently, to address the limitation that existing algorithms can only handle one type of convex function, we further design a universal algorithm capable of delivering small-loss adaptive regret bounds for general convex, exp-concave, and strongly convex functions simultaneously. This is challenging because the universal algorithm follows the meta-expert framework, and we need to ensure that the upper bounds for both the meta-regret and the expert-regret are of the small-loss type. Moreover, we provide a novel analysis demonstrating that our algorithms also enjoy minimax adaptive regret bounds when the functions are non-smooth.
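For readers less familiar with the setup, the two central quantities can be stated as follows (a standard formulation in our own notation; the paper's exact definitions may differ in indexing or constants). For online convex optimization over a convex set $\mathcal{X}$ with losses $f_1, \dots, f_T$, the adaptive regret on an interval $[r, s] \subseteq [T]$ is

\[
\mathrm{Regret}(r, s) \;=\; \sum_{t=r}^{s} f_t(\mathbf{x}_t) \;-\; \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x}),
\]

and an adaptive algorithm controls this quantity over every interval simultaneously. A small-loss bound replaces the dependence on the interval length $s - r + 1$ with the cumulative loss of the interval's best comparator,

\[
F_{r:s}^{*} \;=\; \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x}),
\]

so the guarantee matches the minimax rate in the worst case but becomes much tighter whenever $F_{r:s}^{*}$ is small. The meta-expert framework mentioned in the abstract decomposes the interval regret into two parts,

\[
\underbrace{\sum_{t=r}^{s} f_t(\mathbf{x}_t) - \sum_{t=r}^{s} f_t\big(\mathbf{x}_t^{i}\big)}_{\text{meta-regret}}
\;+\;
\underbrace{\sum_{t=r}^{s} f_t\big(\mathbf{x}_t^{i}\big) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x})}_{\text{expert-regret}},
\]

where $\mathbf{x}_t^{i}$ denotes the prediction of an expert active on $[r, s]$; the difficulty highlighted above is that both parts must admit small-loss upper bounds.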

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-yang24l,
  title     = {Small-loss Adaptive Regret for Online Convex Optimization},
  author    = {Yang, Wenhao and Jiang, Wei and Wang, Yibo and Yang, Ping and Hu, Yao and Zhang, Lijun},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {56156--56195},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/yang24l/yang24l.pdf},
  url       = {https://proceedings.mlr.press/v235/yang24l.html},
  abstract  = {To deal with changing environments, adaptive regret has been proposed to minimize the regret over every interval. Previous studies have established a small-loss adaptive regret bound for general convex functions under the smoothness condition, offering the advantage of being much tighter than minimax rates for benign problems. However, it remains unclear whether similar bounds are attainable for other types of convex functions, such as exp-concave and strongly convex functions. In this paper, we first propose a novel algorithm that achieves a small-loss adaptive regret bound for exp-concave and smooth functions. Subsequently, to address the limitation that existing algorithms can only handle one type of convex function, we further design a universal algorithm capable of delivering small-loss adaptive regret bounds for general convex, exp-concave, and strongly convex functions simultaneously. This is challenging because the universal algorithm follows the meta-expert framework, and we need to ensure that the upper bounds for both the meta-regret and the expert-regret are of the small-loss type. Moreover, we provide a novel analysis demonstrating that our algorithms also enjoy minimax adaptive regret bounds when the functions are non-smooth.}
}
Endnote
%0 Conference Paper
%T Small-loss Adaptive Regret for Online Convex Optimization
%A Wenhao Yang
%A Wei Jiang
%A Yibo Wang
%A Ping Yang
%A Yao Hu
%A Lijun Zhang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-yang24l
%I PMLR
%P 56156--56195
%U https://proceedings.mlr.press/v235/yang24l.html
%V 235
%X To deal with changing environments, adaptive regret has been proposed to minimize the regret over every interval. Previous studies have established a small-loss adaptive regret bound for general convex functions under the smoothness condition, offering the advantage of being much tighter than minimax rates for benign problems. However, it remains unclear whether similar bounds are attainable for other types of convex functions, such as exp-concave and strongly convex functions. In this paper, we first propose a novel algorithm that achieves a small-loss adaptive regret bound for exp-concave and smooth functions. Subsequently, to address the limitation that existing algorithms can only handle one type of convex function, we further design a universal algorithm capable of delivering small-loss adaptive regret bounds for general convex, exp-concave, and strongly convex functions simultaneously. This is challenging because the universal algorithm follows the meta-expert framework, and we need to ensure that the upper bounds for both the meta-regret and the expert-regret are of the small-loss type. Moreover, we provide a novel analysis demonstrating that our algorithms also enjoy minimax adaptive regret bounds when the functions are non-smooth.
APA
Yang, W., Jiang, W., Wang, Y., Yang, P., Hu, Y. &amp; Zhang, L. (2024). Small-loss Adaptive Regret for Online Convex Optimization. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:56156-56195. Available from https://proceedings.mlr.press/v235/yang24l.html.