Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search

Ziyad Benomar, Lorenzo Croissant, Vianney Perchet, Spyros Angelopoulos
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:3776-3805, 2025.

Abstract

One-max search is a classic problem in online decision-making, in which a trader acts on a sequence of revealed prices and accepts one of them irrevocably to maximise its profit. The problem has been studied both in probabilistic and in worst-case settings, notably through competitive analysis, and more recently in learning-augmented settings in which the trader has access to a prediction on the sequence. However, existing approaches either lack smoothness, or do not achieve optimal worst-case guarantees: they do not attain the best possible trade-off between the consistency and the robustness of the algorithm. We close this gap by presenting the first algorithm that simultaneously achieves both of these important objectives. Furthermore, we show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings which capture randomness in both the observed prices and the prediction.
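For context, the sketch below illustrates the classical prediction-free baseline for one-max search, not the paper's Pareto-optimal learning-augmented algorithm: when prices are known to lie in an interval [m, M] with 0 < m <= M, accepting the first price of at least sqrt(mM) guarantees a competitive ratio of sqrt(M/m). The price bounds and the example sequence are assumptions chosen for illustration.

import math

def one_max_search(prices, m, M):
    """Classical threshold rule for one-max search.

    Prices are assumed to lie in [m, M] with 0 < m <= M. Accepting the
    first price of at least sqrt(m * M) guarantees a profit within a
    factor sqrt(M / m) of the best price in hindsight.
    """
    threshold = math.sqrt(m * M)
    for price in prices:
        if price >= threshold:
            return price  # accept irrevocably and stop
    return prices[-1]  # no price reached the threshold: forced to take the last one

# Example: prices in [1, 100], so the threshold is sqrt(1 * 100) = 10.
# The rule accepts 12 (the first price >= 10), while the best price in
# hindsight is 90; the ratio 90/12 = 7.5 is within the sqrt(100/1) = 10 bound.
print(one_max_search([4, 7, 12, 90, 3], m=1, M=100))

Learning-augmented algorithms such as the one in this paper interpolate between this worst-case threshold and trusting a predicted price, trading consistency (performance when the prediction is accurate) against robustness (worst-case performance).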

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-benomar25a,
  title = {Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search},
  author = {Benomar, Ziyad and Croissant, Lorenzo and Perchet, Vianney and Angelopoulos, Spyros},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {3776--3805},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/benomar25a/benomar25a.pdf},
  url = {https://proceedings.mlr.press/v267/benomar25a.html},
  abstract = {One-max search is a classic problem in online decision-making, in which a trader acts on a sequence of revealed prices and accepts one of them irrevocably to maximise its profit. The problem has been studied both in probabilistic and in worst-case settings, notably through competitive analysis, and more recently in learning-augmented settings in which the trader has access to a prediction on the sequence. However, existing approaches either lack smoothness, or do not achieve optimal worst-case guarantees: they do not attain the best possible trade-off between the consistency and the robustness of the algorithm. We close this gap by presenting the first algorithm that simultaneously achieves both of these important objectives. Furthermore, we show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings which capture randomness in both the observed prices and the prediction.}
}
Endnote
%0 Conference Paper
%T Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search
%A Ziyad Benomar
%A Lorenzo Croissant
%A Vianney Perchet
%A Spyros Angelopoulos
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-benomar25a
%I PMLR
%P 3776--3805
%U https://proceedings.mlr.press/v267/benomar25a.html
%V 267
%X One-max search is a classic problem in online decision-making, in which a trader acts on a sequence of revealed prices and accepts one of them irrevocably to maximise its profit. The problem has been studied both in probabilistic and in worst-case settings, notably through competitive analysis, and more recently in learning-augmented settings in which the trader has access to a prediction on the sequence. However, existing approaches either lack smoothness, or do not achieve optimal worst-case guarantees: they do not attain the best possible trade-off between the consistency and the robustness of the algorithm. We close this gap by presenting the first algorithm that simultaneously achieves both of these important objectives. Furthermore, we show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings which capture randomness in both the observed prices and the prediction.
APA
Benomar, Z., Croissant, L., Perchet, V. & Angelopoulos, S. (2025). Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:3776-3805. Available from https://proceedings.mlr.press/v267/benomar25a.html.