Open Problem: Fast and Optimal Online Portfolio Selection

Tim Van Erven, Dirk Van der Hoeven, Wojciech Kotłowski, Wouter M. Koolen
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:3864-3869, 2020.

Abstract

Online portfolio selection has received much attention in the COLT community since its introduction by Cover, but all state-of-the-art methods fall short in at least one of the following ways: they are either i) computationally infeasible; or ii) they do not guarantee optimal regret; or iii) they assume the gradients are bounded, which is unnecessary and cannot be guaranteed. We are interested in a natural follow-the-regularized-leader (FTRL) approach based on the log barrier regularizer, which is computationally feasible. The open problem we put before the community is to formally prove whether this approach achieves the optimal regret. Resolving this question will likely lead to new techniques to analyse FTRL algorithms. There are also interesting technical connections to self-concordance, which has previously been used in the context of bandit convex optimization.
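The abstract's FTRL approach can be illustrated concretely. Below is a minimal numerical sketch, not the authors' implementation: at each round it plays the minimizer over the simplex of the cumulative log loss plus a log-barrier regularizer $\frac{1}{\eta}\sum_i -\log w_i$. The inner solver (entropic mirror descent) and all parameter names (`eta`, `inner_iters`, `lr`) are our own illustrative choices, not prescribed by the paper.

```python
import numpy as np

def ftrl_log_barrier(returns, eta=1.0, inner_iters=300, lr=0.01):
    """Sketch of FTRL with a log-barrier regularizer for online
    portfolio selection (Cover's setting).

    returns : (T, d) array of positive price relatives x_t.
    Round-t loss is -log(w_t . x_t); FTRL plays
        w_t = argmin_{w in simplex} sum_{s<t} -log(w . x_s)
                                    + (1/eta) * sum_i -log(w_i).
    The argmin is approximated by entropic mirror descent,
    an illustrative choice of inner solver.
    """
    T, d = returns.shape
    plays = []
    w = np.full(d, 1.0 / d)  # warm start at the uniform portfolio
    for t in range(T):
        past = returns[:t]
        for _ in range(inner_iters):
            # Gradient of the FTRL objective at w:
            grad = -(1.0 / eta) / w            # log-barrier term
            if t > 0:
                grad -= past.T @ (1.0 / (past @ w))  # past log losses
            # Multiplicative (entropic) update, renormalized to the simplex.
            w = w * np.exp(-lr * grad)
            w /= w.sum()
        plays.append(w.copy())
    return np.array(plays)
```

Because of the barrier, the iterates stay strictly inside the simplex, which is exactly what keeps the per-round optimization well conditioned; in the first round (no data) the objective reduces to the barrier alone, whose minimizer is the uniform portfolio.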

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-van-erven20a,
  title     = {Open Problem: Fast and Optimal Online Portfolio Selection},
  author    = {{Van Erven}, Tim and {Van der Hoeven}, Dirk and Kot{\l}owski, Wojciech and Koolen, Wouter M.},
  pages     = {3864--3869},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/van-erven20a/van-erven20a.pdf},
  url       = {http://proceedings.mlr.press/v125/van-erven20a.html}
}
Endnote
%0 Conference Paper
%T Open Problem: Fast and Optimal Online Portfolio Selection
%A Tim Van Erven
%A Dirk Van der Hoeven
%A Wojciech Kotłowski
%A Wouter M. Koolen
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-van-erven20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 3864--3869
%U http://proceedings.mlr.press
%V 125
%W PMLR
APA
Van Erven, T., Van der Hoeven, D., Kotłowski, W., & Koolen, W. M. (2020). Open Problem: Fast and Optimal Online Portfolio Selection. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:3864-3869.