Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States

Julian Zimmert, Naman Agarwal, Satyen Kale
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:182-226, 2022.

Abstract

We revisit the classical online portfolio selection problem. It is widely assumed that a trade-off between computational complexity and regret is unavoidable, with Cover’s Universal Portfolios algorithm, SOFT-BAYES and ADA-BARRONS currently constituting its state-of-the-art Pareto frontier. In this paper, we present the first efficient algorithm, BISONS, that obtains polylogarithmic regret with memory and per-step running time requirements that are polynomial in the dimension, displacing ADA-BARRONS from the Pareto frontier. Additionally, we resolve a COLT 2020 open problem by showing that a certain Follow-The-Regularized-Leader algorithm with log-barrier regularization suffers an exponentially larger dependence on the dimension than previously conjectured. Thus, we rule out this algorithm as a candidate for the Pareto frontier. We also extend our algorithm and analysis to a more general problem than online portfolio selection, viz. online learning of quantum states with log loss. This algorithm, called SCHRODINGER’S-BISONS, is the first efficient algorithm with polylogarithmic regret for this more general problem.

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-zimmert22a,
  title     = {Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States},
  author    = {Zimmert, Julian and Agarwal, Naman and Kale, Satyen},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {182--226},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/zimmert22a/zimmert22a.pdf},
  url       = {https://proceedings.mlr.press/v178/zimmert22a.html},
  abstract  = {We revisit the classical online portfolio selection problem. It is widely assumed that a trade-off between computational complexity and regret is unavoidable, with Cover’s Universal Portfolios algorithm, SOFT-BAYES and ADA-BARRONS currently constituting its state-of-the-art Pareto frontier. In this paper, we present the first efficient algorithm, BISONS, that obtains polylogarithmic regret with memory and per-step running time requirements that are polynomial in the dimension, displacing ADA-BARRONS from the Pareto frontier. Additionally, we resolve a COLT 2020 open problem by showing that a certain Follow-The-Regularized-Leader algorithm with log-barrier regularization suffers an exponentially larger dependence on the dimension than previously conjectured. Thus, we rule out this algorithm as a candidate for the Pareto frontier. We also extend our algorithm and analysis to a more general problem than online portfolio selection, viz. online learning of quantum states with log loss. This algorithm, called SCHRODINGER’S-BISONS, is the first efficient algorithm with polylogarithmic regret for this more general problem.}
}
Endnote
%0 Conference Paper
%T Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States
%A Julian Zimmert
%A Naman Agarwal
%A Satyen Kale
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-zimmert22a
%I PMLR
%P 182--226
%U https://proceedings.mlr.press/v178/zimmert22a.html
%V 178
%X We revisit the classical online portfolio selection problem. It is widely assumed that a trade-off between computational complexity and regret is unavoidable, with Cover’s Universal Portfolios algorithm, SOFT-BAYES and ADA-BARRONS currently constituting its state-of-the-art Pareto frontier. In this paper, we present the first efficient algorithm, BISONS, that obtains polylogarithmic regret with memory and per-step running time requirements that are polynomial in the dimension, displacing ADA-BARRONS from the Pareto frontier. Additionally, we resolve a COLT 2020 open problem by showing that a certain Follow-The-Regularized-Leader algorithm with log-barrier regularization suffers an exponentially larger dependence on the dimension than previously conjectured. Thus, we rule out this algorithm as a candidate for the Pareto frontier. We also extend our algorithm and analysis to a more general problem than online portfolio selection, viz. online learning of quantum states with log loss. This algorithm, called SCHRODINGER’S-BISONS, is the first efficient algorithm with polylogarithmic regret for this more general problem.
APA
Zimmert, J., Agarwal, N. & Kale, S. (2022). Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:182-226. Available from https://proceedings.mlr.press/v178/zimmert22a.html.