Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making

Adam Block, Max Simchowitz, Alexander Rakhlin
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:1618-1665, 2023.

Abstract

Smoothed online learning has emerged as a popular framework to mitigate the substantial loss in statistical and computational complexity that arises when one moves from classical to adversarial learning. Unfortunately, for some spaces, it has been shown that efficient algorithms suffer an exponentially worse regret than that which is minimax optimal, even when the learner has access to an optimization oracle over the space. To mitigate that exponential dependence, this work introduces a new notion of complexity, the generalized bracketing numbers, which marries constraints on the adversary to the size of the space, and shows that an instantiation of Follow-the-Perturbed-Leader can attain low regret with the number of calls to the optimization oracle scaling optimally with respect to average regret. We then instantiate our bounds in several problems of interest, including online prediction and planning of piecewise continuous functions, which has many applications in fields as diverse as econometrics and robotics.
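The abstract refers to an instantiation of Follow-the-Perturbed-Leader (FTPL). The paper's own oracle-efficient variant is not reproduced here; as background, a minimal textbook-style FTPL sketch over a finite action set might look as follows (the function name, the exponential perturbation, and the fixed-noise variant are illustrative assumptions, not the paper's algorithm):

```python
import random

def ftpl(losses, eta=1.0, seed=0):
    """Textbook Follow-the-Perturbed-Leader over a finite action set.

    losses: list of rounds, each a list of per-action losses.
    eta: scale of the random perturbation.
    Returns the sequence of chosen action indices, one per round.
    """
    rng = random.Random(seed)
    n_actions = len(losses[0])
    # Draw one exponential perturbation per action up front
    # (the "fixed-noise" variant of FTPL).
    noise = [rng.expovariate(1.0) / eta for _ in range(n_actions)]
    cum = [0.0] * n_actions  # cumulative loss of each action so far
    choices = []
    for round_losses in losses:
        # Play the action minimizing perturbed cumulative loss.
        perturbed = [cum[a] - noise[a] for a in range(n_actions)]
        choices.append(min(range(n_actions), key=perturbed.__getitem__))
        # Observe this round's losses and update the running totals.
        for a in range(n_actions):
            cum[a] += round_losses[a]
    return choices
```

On a stationary sequence where one action dominates, the perturbed leader eventually locks onto it; the paper's contribution concerns settings where the action space is continuous and only an optimization oracle (rather than explicit enumeration) is available.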

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-block23b,
  title     = {Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making},
  author    = {Block, Adam and Simchowitz, Max and Rakhlin, Alexander},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {1618--1665},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/block23b/block23b.pdf},
  url       = {https://proceedings.mlr.press/v195/block23b.html},
  abstract  = {Smoothed online learning has emerged as a popular framework to mitigate the substantial loss in statistical and computational complexity that arises when one moves from classical to adversarial learning. Unfortunately, for some spaces, it has been shown that efficient algorithms suffer an exponentially worse regret than that which is minimax optimal, even when the learner has access to an optimization oracle over the space. To mitigate that exponential dependence, this work introduces a new notion of complexity, the generalized bracketing numbers, which marries constraints on the adversary to the size of the space, and shows that an instantiation of Follow-the-Perturbed-Leader can attain low regret with the number of calls to the optimization oracle scaling optimally with respect to average regret. We then instantiate our bounds in several problems of interest, including online prediction and planning of piecewise continuous functions, which has many applications in fields as diverse as econometrics and robotics.}
}
Endnote
%0 Conference Paper
%T Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making
%A Adam Block
%A Max Simchowitz
%A Alexander Rakhlin
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-block23b
%I PMLR
%P 1618--1665
%U https://proceedings.mlr.press/v195/block23b.html
%V 195
%X Smoothed online learning has emerged as a popular framework to mitigate the substantial loss in statistical and computational complexity that arises when one moves from classical to adversarial learning. Unfortunately, for some spaces, it has been shown that efficient algorithms suffer an exponentially worse regret than that which is minimax optimal, even when the learner has access to an optimization oracle over the space. To mitigate that exponential dependence, this work introduces a new notion of complexity, the generalized bracketing numbers, which marries constraints on the adversary to the size of the space, and shows that an instantiation of Follow-the-Perturbed-Leader can attain low regret with the number of calls to the optimization oracle scaling optimally with respect to average regret. We then instantiate our bounds in several problems of interest, including online prediction and planning of piecewise continuous functions, which has many applications in fields as diverse as econometrics and robotics.
APA
Block, A., Simchowitz, M., & Rakhlin, A. (2023). Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:1618-1665. Available from https://proceedings.mlr.press/v195/block23b.html.
