Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors

Matei Gabriel Cosa, Marek Elias
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:11357-11378, 2025.

Abstract

Combining algorithms is one of the key techniques in learning-augmented algorithms. We consider the following problem: We are given $\ell$ heuristics for Metrical Task Systems (MTS), where each might be tailored to a different type of input instances. While processing an input instance received online, we are allowed to query the action of only one of the heuristics at each time step. Our goal is to achieve performance comparable to the best of the given heuristics. The main difficulty of our setting comes from the fact that the cost paid by a heuristic at time $t$ cannot be estimated unless the same heuristic was also queried at time $t-1$. This is related to Bandit Learning against memory bounded adversaries (Arora et al., 2012). We show how to achieve regret of $O(\text{OPT}^{2/3})$ and prove a tight lower bound based on the construction of Dekel et al. (2013).
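The abstract's core difficulty (a heuristic's cost at time $t$ is observable only if that heuristic was also queried at $t-1$) is commonly handled in bandit learning by playing in blocks: follow one heuristic for several consecutive steps so its per-step costs become observable, then update. The sketch below is a hypothetical blocked EXP3-style combiner illustrating that idea under simplifying assumptions (per-step costs in $[0,1]$, no movement costs between heuristics' states); it is not the algorithm from the paper.

```python
import math
import random

def blocked_exp3(heuristic_costs, block_size, eta):
    """Illustrative blocked EXP3-style combiner (not the paper's algorithm).

    heuristic_costs: list of ell cost sequences; heuristic_costs[i][t] in [0, 1]
    is the cost heuristic i would pay at step t. Within each block we follow a
    single heuristic, so its per-step costs are observable (it was also queried
    at the previous step); the other heuristics' costs remain hidden.
    """
    ell = len(heuristic_costs)
    horizon = len(heuristic_costs[0])
    weights = [1.0] * ell
    total_cost = 0.0
    for start in range(0, horizon, block_size):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Sample one heuristic and follow it for the entire block.
        i = random.choices(range(ell), weights=probs)[0]
        block = heuristic_costs[i][start:start + block_size]
        block_cost = sum(block)
        total_cost += block_cost
        # Importance-weighted loss estimate for the chosen arm only;
        # unobserved arms implicitly receive a zero estimate.
        est = block_cost / (len(block) * probs[i])
        weights[i] *= math.exp(-eta * est)
    return total_cost
```

Tuning the block length trades off switching/estimation overhead against adaptivity, which is the kind of balance that yields regret exponents such as the $\text{OPT}^{2/3}$ rate mentioned in the abstract.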

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cosa25a,
  title     = {Learning-Augmented Algorithms for {MTS} with Bandit Access to Multiple Predictors},
  author    = {Cosa, Matei Gabriel and Elias, Marek},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {11357--11378},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cosa25a/cosa25a.pdf},
  url       = {https://proceedings.mlr.press/v267/cosa25a.html},
  abstract  = {Combining algorithms is one of the key techniques in learning-augmented algorithms. We consider the following problem: We are given $\ell$ heuristics for Metrical Task Systems (MTS), where each might be tailored to a different type of input instances. While processing an input instance received online, we are allowed to query the action of only one of the heuristics at each time step. Our goal is to achieve performance comparable to the best of the given heuristics. The main difficulty of our setting comes from the fact that the cost paid by a heuristic at time $t$ cannot be estimated unless the same heuristic was also queried at time $t-1$. This is related to Bandit Learning against memory bounded adversaries (Arora et al., 2012). We show how to achieve regret of $O(\text{OPT}^{2/3})$ and prove a tight lower bound based on the construction of Dekel et al. (2013).}
}
Endnote
%0 Conference Paper
%T Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors
%A Matei Gabriel Cosa
%A Marek Elias
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cosa25a
%I PMLR
%P 11357--11378
%U https://proceedings.mlr.press/v267/cosa25a.html
%V 267
%X Combining algorithms is one of the key techniques in learning-augmented algorithms. We consider the following problem: We are given $\ell$ heuristics for Metrical Task Systems (MTS), where each might be tailored to a different type of input instances. While processing an input instance received online, we are allowed to query the action of only one of the heuristics at each time step. Our goal is to achieve performance comparable to the best of the given heuristics. The main difficulty of our setting comes from the fact that the cost paid by a heuristic at time $t$ cannot be estimated unless the same heuristic was also queried at time $t-1$. This is related to Bandit Learning against memory bounded adversaries (Arora et al., 2012). We show how to achieve regret of $O(\text{OPT}^{2/3})$ and prove a tight lower bound based on the construction of Dekel et al. (2013).
APA
Cosa, M. G., & Elias, M. (2025). Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:11357-11378. Available from https://proceedings.mlr.press/v267/cosa25a.html.