Dueling Optimization with a Monotone Adversary
Proceedings of The 35th International Conference on Algorithmic Learning Theory, PMLR 237:221-243, 2024.
Abstract
We introduce and study the problem of \textit{dueling optimization with a monotone adversary}, a generalization of (noiseless) dueling convex optimization. The goal is to design an online algorithm that finds a minimizer $\bm{x}^{\star}$ of a function $f : \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^{d}$. In each round, the algorithm submits a pair of guesses $\bm{x}^{(1)}$ and $\bm{x}^{(2)}$, and the adversary responds with \textit{any} point in the space that is at least as good as both guesses. The cost of each query is the suboptimality of the worse of the two guesses, i.e., $\max\{f(\bm{x}^{(1)}), f(\bm{x}^{(2)})\} - f(\bm{x}^{\star})$. The goal is to minimize the number of iterations required to find an $\varepsilon$-optimal point and to minimize the total cost (regret) of the guesses over many rounds. Our main result is an efficient randomized algorithm for several natural choices of the function $f$ and set $\mathcal{X}$ that incurs cost $O(d)$ and iteration complexity $O(d\log(1/\varepsilon)^{2})$. Moreover, our dependence on $d$ is asymptotically optimal: we exhibit examples in which any randomized algorithm for this problem must incur $\Omega(d)$ cost and iteration complexity.
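To make the interaction protocol concrete, the following is a minimal sketch of the round structure described above. The quadratic objective, the naive random-direction learner, and the adversary that simply returns the better of the two guesses are illustrative assumptions; this is not the paper's algorithm, only the query/response loop and its cost accounting.

```python
import numpy as np

# Sketch of the dueling-optimization protocol with a monotone adversary.
# All concrete choices below (objective, learner, adversary) are assumptions
# for illustration; the paper's algorithm and lower bounds are not reproduced here.

rng = np.random.default_rng(0)
d = 5
x_star = rng.standard_normal(d)               # unknown minimizer


def f(x):
    # Quadratic objective with f(x_star) = 0, so f(x) equals the suboptimality of x.
    return float(np.sum((x - x_star) ** 2))


def monotone_adversary(x1, x2):
    # A monotone adversary may return ANY point at least as good as both guesses;
    # this simple instance just returns the better of the two.
    return x1 if f(x1) <= f(x2) else x2


x = np.zeros(d)                               # learner's current anchor point
total_cost = 0.0
for t in range(200):
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    step = 0.9 ** t                           # shrinking step size (naive choice)
    x1, x2 = x + step * direction, x - step * direction   # pair of guesses
    total_cost += max(f(x1), f(x2))           # cost = suboptimality of the worse guess
    x = monotone_adversary(x1, x2)            # adversary's response seeds the next round

print(f"final suboptimality: {f(x):.4f}, total cost: {total_cost:.2f}")
```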