Online Non-Convex Learning: Following the Perturbed Leader is Optimal
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:845-861, 2020.
Abstract
We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves the optimal regret rate of $O(T^{-1/2})$ in this setting. This improves upon the previous best-known regret rate of $O(T^{-1/3})$ for FTPL. We further show that an optimistic variant of FTPL achieves better regret bounds when the sequence of losses encountered by the learner is “predictable”.
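As a concrete illustration of the oracle-based FTPL scheme the abstract refers to, here is a minimal Python sketch: each round, the learner plays the minimizer of the perturbed cumulative past loss, as returned by the offline oracle. The `oracle` interface, the parameter name `eta`, the grid-search toy oracle, and the choice of exponential perturbations are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ftpl(loss_fns, oracle, dim, eta, rng=None):
    """Follow the Perturbed Leader against a sequence of losses.

    loss_fns : sequence of loss functions f_1, ..., f_T, revealed one per round
    oracle   : offline optimizer; oracle(g) returns an (approximate) minimizer of g
    dim      : dimension of the decision variable
    eta      : perturbation scale (hypothetical parameter name)
    """
    rng = np.random.default_rng(rng)
    plays = []
    for t in range(len(loss_fns)):
        # Draw a fresh perturbation each round; exponential noise is one
        # choice that has been analyzed in the non-convex setting.
        sigma = rng.exponential(scale=1.0 / eta, size=dim)
        past = loss_fns[:t]

        def perturbed(x, past=past, sigma=sigma):
            # Cumulative past loss minus a random linear term.
            return sum(f(x) for f in past) - sigma @ x

        # The offline oracle does all the non-convex heavy lifting.
        plays.append(oracle(perturbed))
    return plays

# Toy usage: scalar decision set [0, 1], grid-search oracle,
# and a non-convex (cosine) loss sequence.
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
oracle = lambda g: grid[np.argmin([g(x) for x in grid])]
losses = [lambda x, c=c: float(np.cos(5 * x[0] + c)) for c in np.linspace(0, 3, 50)]
plays = ftpl(losses, oracle, dim=1, eta=0.1)
```

The sketch only assumes black-box access to the oracle; the regret guarantees in the paper are driven by the perturbation, not by any structure of the losses themselves.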