Dueling Optimization with a Monotone Adversary

Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang
Proceedings of The 35th International Conference on Algorithmic Learning Theory, PMLR 237:221-243, 2024.

Abstract

We introduce and study the problem of \textit{dueling optimization with a monotone adversary}, which is a generalization of (noiseless) dueling convex optimization. The goal is to design an online algorithm to find a minimizer $\bm{x}^{\star}$ for a function $f\colon \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^d$. In each round, the algorithm submits a pair of guesses, i.e., $\bm{x}^{(1)}$ and $\bm{x}^{(2)}$, and the adversary responds with \textit{any} point in the space that is at least as good as both guesses. The cost of each query is the suboptimality of the worse of the two guesses; i.e., ${\max} \left( f(\bm{x}^{(1)}), f(\bm{x}^{(2)}) \right) - f(\bm{x}^{\star})$. The goal is to minimize the number of iterations required to find an $\varepsilon$-optimal point and to minimize the total cost (regret) of the guesses over many rounds. Our main result is an efficient randomized algorithm for several natural choices of the function $f$ and set $\mathcal{X}$ that incurs cost $O(d)$ and iteration complexity $O(d\log(1/\varepsilon)^2)$. Moreover, our dependence on $d$ is asymptotically optimal, as we show examples in which any randomized algorithm for this problem must incur $\Omega(d)$ cost and iteration complexity.
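As a concrete illustration of the protocol in the abstract (not the paper's algorithm), the sketch below simulates one round on a simple convex quadratic $f(\bm{x}) = \|\bm{x}\|^2$ with minimizer $\bm{x}^{\star} = \bm{0}$. The adversary here is one hypothetical choice: it may return any point at least as good as both guesses, and this one returns the better guess nudged toward the minimizer. All names are illustrative.

```python
import numpy as np

def f(x):
    # Convex quadratic with minimizer x* = 0, so f(x*) = 0.
    return float(np.dot(x, x))

def monotone_adversary(x1, x2):
    # The adversary may return ANY point at least as good as both
    # guesses. This illustrative adversary returns the better guess,
    # scaled toward the minimizer, which preserves monotonicity:
    # f(0.9 * better) <= f(better) = min(f(x1), f(x2)).
    better = x1 if f(x1) <= f(x2) else x2
    return 0.9 * better

rng = np.random.default_rng(0)
d = 5
x1 = rng.standard_normal(d)  # first guess
x2 = rng.standard_normal(d)  # second guess

response = monotone_adversary(x1, x2)

# Cost of the round: suboptimality of the WORSE guess, max(f(x1), f(x2)) - f(x*).
round_cost = max(f(x1), f(x2)) - 0.0

# Monotone-adversary guarantee: the response is at least as good as both guesses.
assert f(response) <= min(f(x1), f(x2))
```

The algorithm only ever observes `response`, never gradient or function-value feedback, which is what makes the setting harder than standard first-order convex optimization.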

Cite this Paper


BibTeX
@InProceedings{pmlr-v237-blum24a,
  title = {Dueling Optimization with a Monotone Adversary},
  author = {Blum, Avrim and Gupta, Meghal and Li, Gene and Manoj, Naren Sarayu and Saha, Aadirupa and Yang, Yuanyuan},
  booktitle = {Proceedings of The 35th International Conference on Algorithmic Learning Theory},
  pages = {221--243},
  year = {2024},
  editor = {Vernade, Claire and Hsu, Daniel},
  volume = {237},
  series = {Proceedings of Machine Learning Research},
  month = {25--28 Feb},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v237/blum24a/blum24a.pdf},
  url = {https://proceedings.mlr.press/v237/blum24a.html},
  abstract = {We introduce and study the problem of \textit{dueling optimization with a monotone adversary}, which is a generalization of (noiseless) dueling convex optimization. The goal is to design an online algorithm to find a minimizer $\bm{x}^{\star}$ for a function $f\colon \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^d$. In each round, the algorithm submits a pair of guesses, i.e., $\bm{x}^{(1)}$ and $\bm{x}^{(2)}$, and the adversary responds with \textit{any} point in the space that is at least as good as both guesses. The cost of each query is the suboptimality of the worse of the two guesses; i.e., ${\max} \left( f(\bm{x}^{(1)}), f(\bm{x}^{(2)}) \right) - f(\bm{x}^{\star})$. The goal is to minimize the number of iterations required to find an $\varepsilon$-optimal point and to minimize the total cost (regret) of the guesses over many rounds. Our main result is an efficient randomized algorithm for several natural choices of the function $f$ and set $\mathcal{X}$ that incurs cost $O(d)$ and iteration complexity $O(d\log(1/\varepsilon)^2)$. Moreover, our dependence on $d$ is asymptotically optimal, as we show examples in which any randomized algorithm for this problem must incur $\Omega(d)$ cost and iteration complexity.}
}
Endnote
%0 Conference Paper
%T Dueling Optimization with a Monotone Adversary
%A Avrim Blum
%A Meghal Gupta
%A Gene Li
%A Naren Sarayu Manoj
%A Aadirupa Saha
%A Yuanyuan Yang
%B Proceedings of The 35th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Claire Vernade
%E Daniel Hsu
%F pmlr-v237-blum24a
%I PMLR
%P 221--243
%U https://proceedings.mlr.press/v237/blum24a.html
%V 237
%X We introduce and study the problem of \textit{dueling optimization with a monotone adversary}, which is a generalization of (noiseless) dueling convex optimization. The goal is to design an online algorithm to find a minimizer $\bm{x}^{\star}$ for a function $f\colon \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^d$. In each round, the algorithm submits a pair of guesses, i.e., $\bm{x}^{(1)}$ and $\bm{x}^{(2)}$, and the adversary responds with \textit{any} point in the space that is at least as good as both guesses. The cost of each query is the suboptimality of the worse of the two guesses; i.e., ${\max} \left( f(\bm{x}^{(1)}), f(\bm{x}^{(2)}) \right) - f(\bm{x}^{\star})$. The goal is to minimize the number of iterations required to find an $\varepsilon$-optimal point and to minimize the total cost (regret) of the guesses over many rounds. Our main result is an efficient randomized algorithm for several natural choices of the function $f$ and set $\mathcal{X}$ that incurs cost $O(d)$ and iteration complexity $O(d\log(1/\varepsilon)^2)$. Moreover, our dependence on $d$ is asymptotically optimal, as we show examples in which any randomized algorithm for this problem must incur $\Omega(d)$ cost and iteration complexity.
APA
Blum, A., Gupta, M., Li, G., Manoj, N.S., Saha, A. & Yang, Y. (2024). Dueling Optimization with a Monotone Adversary. Proceedings of The 35th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 237:221-243. Available from https://proceedings.mlr.press/v237/blum24a.html.