Online Covering with Multiple Experts

Kim Thang Nguyen
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-36, 2026.

Abstract

Designing online algorithms with machine learning predictions is a recent approach that extends beyond the worst-case paradigm for various practically relevant online problems, such as scheduling, caching, and clustering. While most previous learning-augmented algorithms focus on integrating the predictions of a single oracle, we study the design of online algorithms with \emph{multiple} prediction sources (experts). To go beyond the performance guarantee of the popular static best expert in hindsight benchmark, we introduce a new benchmark that can be viewed as a linear combination of predictions that evolve over time. We present competitive algorithms in the new dynamic benchmark for $0$-$1$ online covering problems with a performance guarantee of $O(\log K)$ if the objective is linear and $O(\ln(K)) \cdot \frac{\lambda}{(1-\mu\ln(K))}$ if the objective is non-linear, where $K$ is the number of experts and $(\lambda, \mu)$ are parameters of the objective function. Our approach gives a new perspective on combining multiple algorithms in an online manner (a central subject in the online algorithm research community) using machine learning techniques.
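The abstract's dynamic benchmark is a linear combination of the experts' predictions that evolves over time. As a purely illustrative sketch (not the paper's algorithm), the code below combines $K$ experts' feasible fractional solutions to a linear covering program ($Ax \ge 1$, $x \ge 0$) by an exponentially weighted convex combination; since the constraints are linear, any convex combination of feasible solutions remains feasible, and cheaper experts receive more weight. The function name, the learning rate `eta`, and the cost inputs are all assumptions for this sketch.

```python
import math

def combine_experts(expert_solutions, costs, eta=0.5):
    """Convex-combine K feasible covering solutions, down-weighting costly experts.

    expert_solutions: list of K fractional solutions (each a list of floats),
                      every one assumed feasible for the covering constraints.
    costs:            observed cost of each expert's solution so far.
    eta:              learning rate for the exponential weighting (assumed).

    Because covering constraints (Ax >= 1, x >= 0) are linear, any convex
    combination of feasible solutions is itself feasible.
    """
    weights = [math.exp(-eta * c) for c in costs]
    total = sum(weights)
    probs = [w / total for w in weights]
    n = len(expert_solutions[0])
    return [sum(p * x[i] for p, x in zip(probs, expert_solutions))
            for i in range(n)]
```

For example, with two experts covering the constraint $x_1 + x_2 \ge 1$ via the unit solutions $(1,0)$ and $(0,1)$, the cheaper expert dominates the combination while feasibility is preserved. Recomputing the weights as costs accrue over time gives a combination that evolves with the input, in the spirit of the dynamic benchmark described above.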

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-nguyen26a,
  title     = {Online Covering with Multiple Experts},
  author    = {Nguyen, Kim Thang},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--36},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/nguyen26a/nguyen26a.pdf},
  url       = {https://proceedings.mlr.press/v313/nguyen26a.html},
  abstract  = {Designing online algorithms with machine learning predictions is a recent approach that extends beyond the worst-case paradigm for various practically relevant online problems, such as scheduling, caching, and clustering. While most previous learning-augmented algorithms focus on integrating the predictions of a single oracle, we study the design of online algorithms with \emph{multiple} prediction sources (experts). To go beyond the performance guarantee of the popular static best expert in hindsight benchmark, we introduce a new benchmark that can be viewed as a linear combination of predictions that evolve over time. We present competitive algorithms in the new dynamic benchmark for $0$-$1$ online covering problems with a performance guarantee of $O(\log K)$ if the objective is linear and $O(\ln(K)) \cdot \frac{\lambda}{(1-\mu\ln(K))}$ if the objective is non-linear, where $K$ is the number of experts and $(\lambda, \mu)$ are parameters of the objective function. Our approach gives a new perspective on combining multiple algorithms in an online manner (a central subject in the online algorithm research community) using machine learning techniques.}
}
Endnote
%0 Conference Paper
%T Online Covering with Multiple Experts
%A Kim Thang Nguyen
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-nguyen26a
%I PMLR
%P 1--36
%U https://proceedings.mlr.press/v313/nguyen26a.html
%V 313
%X Designing online algorithms with machine learning predictions is a recent approach that extends beyond the worst-case paradigm for various practically relevant online problems, such as scheduling, caching, and clustering. While most previous learning-augmented algorithms focus on integrating the predictions of a single oracle, we study the design of online algorithms with \emph{multiple} prediction sources (experts). To go beyond the performance guarantee of the popular static best expert in hindsight benchmark, we introduce a new benchmark that can be viewed as a linear combination of predictions that evolve over time. We present competitive algorithms in the new dynamic benchmark for $0$-$1$ online covering problems with a performance guarantee of $O(\log K)$ if the objective is linear and $O(\ln(K)) \cdot \frac{\lambda}{(1-\mu\ln(K))}$ if the objective is non-linear, where $K$ is the number of experts and $(\lambda, \mu)$ are parameters of the objective function. Our approach gives a new perspective on combining multiple algorithms in an online manner (a central subject in the online algorithm research community) using machine learning techniques.
APA
Nguyen, K.T. (2026). Online Covering with Multiple Experts. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-36. Available from https://proceedings.mlr.press/v313/nguyen26a.html.