Optimal robustness-consistency tradeoffs for learning-augmented metrical task systems
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:9377-9399, 2023.
Abstract
We examine the problem of designing learning-augmented algorithms for metrical task systems (MTS) that exploit machine-learned advice while maintaining rigorous, worst-case guarantees on performance. We propose an algorithm, DART, that achieves this dual objective, providing cost within a multiplicative factor $(1+\epsilon)$ of the machine-learned advice (i.e., consistency) while ensuring cost within a multiplicative factor $2^{O(1/\epsilon)}$ of a baseline robust algorithm (i.e., robustness) for any $\epsilon > 0$. We show that this exponential tradeoff between consistency and robustness is unavoidable in general, but that in important subclasses of MTS, such as when the metric space has bounded diameter and in the $k$-server problem, our algorithm achieves improved, polynomial tradeoffs between consistency and robustness.
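To make the consistency-robustness terminology concrete, the sketch below illustrates a generic "switching" scheme of the kind commonly used to combine machine-learned advice with a robust baseline in MTS: follow the advice trajectory while its running cost stays under a budget, and fall back to the robust trajectory otherwise, growing the budget at each switch. This is a hypothetical illustration only, not the paper's DART algorithm, and the toy threshold schedule does not reproduce the paper's $(1+\epsilon)$ consistency and $2^{O(1/\epsilon)}$ robustness bounds; all function and variable names are assumptions.

```python
from typing import Callable, List, Sequence

def mts_cost(states: Sequence[int], costs: Sequence[Sequence[float]],
             dist: Callable[[int, int], float], start: int) -> float:
    """Total service + movement cost of a state trajectory in an MTS instance."""
    total, prev = 0.0, start
    for s, c in zip(states, costs):
        total += dist(prev, s) + c[s]
        prev = s
    return total

def switch_combine(advice: Sequence[int], robust: Sequence[int],
                   costs: Sequence[Sequence[float]],
                   dist: Callable[[int, int], float],
                   start: int, eps: float) -> List[int]:
    """Toy combiner: follow the advice trajectory while its running cost is
    within the current budget; otherwise follow the robust trajectory, and
    raise the budget by a factor (1 + eps) whenever both exceed it."""
    out: List[int] = []
    adv_cost = rob_cost = 0.0
    budget = 1.0                # arbitrary initial budget for illustration
    following_advice = True
    for t, c in enumerate(costs):
        adv_prev = advice[t - 1] if t > 0 else start
        rob_prev = robust[t - 1] if t > 0 else start
        adv_cost += dist(adv_prev, advice[t]) + c[advice[t]]
        rob_cost += dist(rob_prev, robust[t]) + c[robust[t]]
        if following_advice and adv_cost > (1 + eps) * budget:
            following_advice = False     # advice looks costly: fall back
        elif not following_advice and rob_cost > (1 + eps) * budget:
            following_advice = True      # robust also exceeded: raise budget
            budget *= (1 + eps)
        out.append(advice[t] if following_advice else robust[t])
    return out

if __name__ == "__main__":
    # Tiny 2-state uniform-metric example: advice is perfect here, so the
    # combined trajectory mostly tracks it.
    dist = lambda a, b: 0.0 if a == b else 1.0
    costs = [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0], [1.0, 0.0]]
    advice = [0, 0, 1, 1]       # follows the cheap state
    robust = [0, 0, 0, 0]       # stays put regardless of cost
    combined = switch_combine(advice, robust, costs, dist, start=0, eps=0.5)
    print(combined, mts_cost(combined, costs, dist, start=0))
```

The switching cost incurred each time the combiner moves between the two trajectories is what drives the exponential dependence on $1/\epsilon$ in general metric spaces; the paper shows this blow-up is unavoidable in the worst case but can be reduced to a polynomial tradeoff in the bounded-diameter and $k$-server settings.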