Online Non-Convex Learning: Following the Perturbed Leader is Optimal

Arun Sai Suggala, Praneeth Netrapalli
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:845-861, 2020.

Abstract

We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves the optimal regret rate of $O(T^{-1/2})$ in this setting. This improves upon the previous best-known regret rate of $O(T^{-1/3})$ for FTPL. We further show that an optimistic variant of FTPL achieves better regret bounds when the sequence of losses encountered by the learner is “predictable”.
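For orientation, the oracle-based FTPL update studied here takes a standard form; the display below is a sketch in generic notation, not text from the paper. At each round $t$, the learner draws a fresh random perturbation $\sigma_t$ and plays
$$x_t \in \arg\min_{x \in \mathcal{X}} \left\{ \sum_{s=1}^{t-1} f_s(x) - \langle \sigma_t, x \rangle \right\},$$
where $f_1, \dots, f_{t-1}$ are the (possibly non-convex) losses revealed so far and the $\arg\min$ is computed by a single call to the offline optimization oracle.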

Cite this Paper

BibTeX
@InProceedings{pmlr-v117-suggala20a,
  title     = {Online Non-Convex Learning: Following the Perturbed Leader is Optimal},
  author    = {Suggala, Arun Sai and Netrapalli, Praneeth},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  pages     = {845--861},
  year      = {2020},
  editor    = {Kontorovich, Aryeh and Neu, Gergely},
  volume    = {117},
  series    = {Proceedings of Machine Learning Research},
  month     = {08 Feb--11 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v117/suggala20a/suggala20a.pdf},
  url       = {https://proceedings.mlr.press/v117/suggala20a.html},
  abstract  = {We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves optimal regret rate of $O(T^{-1/2})$ in this setting. This improves upon the previous best-known regret rate of $O(T^{-1/3})$ for FTPL. We further show that an optimistic variant of FTPL achieves better regret bounds when the sequence of losses encountered by the learner is “predictable”.}
}
Endnote
%0 Conference Paper
%T Online Non-Convex Learning: Following the Perturbed Leader is Optimal
%A Arun Sai Suggala
%A Praneeth Netrapalli
%B Proceedings of the 31st International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Aryeh Kontorovich
%E Gergely Neu
%F pmlr-v117-suggala20a
%I PMLR
%P 845--861
%U https://proceedings.mlr.press/v117/suggala20a.html
%V 117
%X We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves optimal regret rate of $O(T^{-1/2})$ in this setting. This improves upon the previous best-known regret rate of $O(T^{-1/3})$ for FTPL. We further show that an optimistic variant of FTPL achieves better regret bounds when the sequence of losses encountered by the learner is “predictable”.
APA
Suggala, A. S., & Netrapalli, P. (2020). Online Non-Convex Learning: Following the Perturbed Leader is Optimal. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:845-861. Available from https://proceedings.mlr.press/v117/suggala20a.html.