Dueling Convex Optimization with General Preferences

Aadirupa Saha, Tomer Koren, Yishay Mansour
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:52552-52564, 2025.

Abstract

We address the problem of convex optimization with dueling feedback, where the goal is to minimize a convex function given a weaker form of dueling feedback. Each query consists of two points, and the dueling feedback returns a (noisy) single-bit comparison of the function values at the two queried points. The function values are translated into the single comparison bit through a transfer function. This problem has been addressed previously for some restricted classes of transfer functions; here we consider a very general class of transfer functions that includes all functions admitting a series expansion about the origin. Our main contribution is an efficient algorithm with a convergence rate of $O(\epsilon^{-4p})$ for smooth convex functions, and an optimal rate of $\widetilde O(\epsilon^{-2p})$ when the objective is both smooth and strongly convex, where $p$ is the minimal degree (with a non-zero coefficient) in the transfer function's series expansion about the origin.
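To make the feedback model concrete, the following is an illustrative formalization in our own notation (the paper's exact conventions may differ): a query of a pair $(x, y)$ yields a bit $o \in \{0, 1\}$ with
$$\Pr[o = 1] = \sigma\bigl(f(y) - f(x)\bigr),$$
where $f$ is the convex objective and $\sigma : \mathbb{R} \to [0, 1]$ is the transfer function. When $\sigma$ admits a series expansion about the origin,
$$\sigma(z) = \sigma(0) + \sum_{i \ge p} a_i z^i, \qquad a_p \neq 0,$$
the parameter $p$ appearing in the rates is the smallest degree with a non-zero coefficient. For example, the logistic transfer $\sigma(z) = 1/(1 + e^{-z}) = \tfrac{1}{2} + \tfrac{z}{4} - \tfrac{z^3}{48} + \cdots$ has $p = 1$, whereas a transfer that is flat to higher order near zero has larger $p$ and correspondingly slower guaranteed rates.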

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-saha25a,
  title     = {Dueling Convex Optimization with General Preferences},
  author    = {Saha, Aadirupa and Koren, Tomer and Mansour, Yishay},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {52552--52564},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/saha25a/saha25a.pdf},
  url       = {https://proceedings.mlr.press/v267/saha25a.html},
  abstract  = {We address the problem of convex optimization with dueling feedback, where the goal is to minimize a convex function given a weaker form of dueling feedback. Each query consists of two points and the dueling feedback returns a (noisy) single-bit binary comparison of the function values of the two queried points. The translation of the function values to the single comparison bit is through a transfer function. This problem has been addressed previously for some restricted classes of transfer functions, but here we consider a very general transfer function class which includes all functions that admit a series expansion about the origin. Our main contribution is an efficient algorithm with convergence rate of $O(\epsilon^{-4p})$ for smooth convex functions, and an optimal rate of $\widetilde O(\epsilon^{-2p})$ when the objective is both smooth and strongly convex, where $p$ is the minimal degree (with a non-zero coefficient) in the transfer's series expansion about the origin.}
}
Endnote
%0 Conference Paper
%T Dueling Convex Optimization with General Preferences
%A Aadirupa Saha
%A Tomer Koren
%A Yishay Mansour
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-saha25a
%I PMLR
%P 52552--52564
%U https://proceedings.mlr.press/v267/saha25a.html
%V 267
%X We address the problem of convex optimization with dueling feedback, where the goal is to minimize a convex function given a weaker form of dueling feedback. Each query consists of two points and the dueling feedback returns a (noisy) single-bit binary comparison of the function values of the two queried points. The translation of the function values to the single comparison bit is through a transfer function. This problem has been addressed previously for some restricted classes of transfer functions, but here we consider a very general transfer function class which includes all functions that admit a series expansion about the origin. Our main contribution is an efficient algorithm with convergence rate of $O(\epsilon^{-4p})$ for smooth convex functions, and an optimal rate of $\widetilde O(\epsilon^{-2p})$ when the objective is both smooth and strongly convex, where $p$ is the minimal degree (with a non-zero coefficient) in the transfer's series expansion about the origin.
APA
Saha, A., Koren, T. & Mansour, Y. (2025). Dueling Convex Optimization with General Preferences. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:52552-52564. Available from https://proceedings.mlr.press/v267/saha25a.html.
