On Computable Online Learning

Niki Hasrati, Shai Ben-David
Proceedings of The 34th International Conference on Algorithmic Learning Theory, PMLR 201:707-725, 2023.

Abstract

We initiate the first study of computable online (c-online) learning, which we analyse under varying requirements for “optimality” in terms of the mistake bound. Our main contribution is to give a necessary and sufficient condition for optimal c-online learning and show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning. Furthermore, we introduce anytime optimal (a-optimal) online learning, a more natural conceptualization of “optimality” and a generalization of Littlestone’s Standard Optimal Algorithm. We show the existence of a computational separation between a-optimal and optimal online learning, proving that a-optimal online learning is computationally more difficult. Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with finite mistake bound. A potential avenue for strengthening this result is suggested by exploring the relationship between c-online and CPAC learning, where we show that c-online learning is as difficult as improper CPAC learning.

Cite this Paper


BibTeX
@InProceedings{pmlr-v201-hasrati23a,
  title     = {On Computable Online Learning},
  author    = {Hasrati, Niki and Ben-David, Shai},
  booktitle = {Proceedings of The 34th International Conference on Algorithmic Learning Theory},
  pages     = {707--725},
  year      = {2023},
  editor    = {Agrawal, Shipra and Orabona, Francesco},
  volume    = {201},
  series    = {Proceedings of Machine Learning Research},
  month     = {20 Feb--23 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v201/hasrati23a/hasrati23a.pdf},
  url       = {https://proceedings.mlr.press/v201/hasrati23a.html},
  abstract  = {We initiate the first study of computable online (c-online) learning, which we analyse under varying requirements for “optimality” in terms of the mistake bound. Our main contribution is to give a necessary and sufficient condition for optimal c-online learning and show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning. Furthermore, we introduce anytime optimal (a-optimal) online learning, a more natural conceptualization of “optimality” and a generalization of Littlestone’s Standard Optimal Algorithm. We show the existence of a computational separation between a-optimal and optimal online learning, proving that a-optimal online learning is computationally more difficult. Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with finite mistake bound. A potential avenue for strengthening this result is suggested by exploring the relationship between c-online and CPAC learning, where we show that c-online learning is as difficult as improper CPAC learning.}
}
Endnote
%0 Conference Paper
%T On Computable Online Learning
%A Niki Hasrati
%A Shai Ben-David
%B Proceedings of The 34th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Shipra Agrawal
%E Francesco Orabona
%F pmlr-v201-hasrati23a
%I PMLR
%P 707--725
%U https://proceedings.mlr.press/v201/hasrati23a.html
%V 201
%X We initiate the first study of computable online (c-online) learning, which we analyse under varying requirements for “optimality” in terms of the mistake bound. Our main contribution is to give a necessary and sufficient condition for optimal c-online learning and show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning. Furthermore, we introduce anytime optimal (a-optimal) online learning, a more natural conceptualization of “optimality” and a generalization of Littlestone’s Standard Optimal Algorithm. We show the existence of a computational separation between a-optimal and optimal online learning, proving that a-optimal online learning is computationally more difficult. Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with finite mistake bound. A potential avenue for strengthening this result is suggested by exploring the relationship between c-online and CPAC learning, where we show that c-online learning is as difficult as improper CPAC learning.
APA
Hasrati, N. & Ben-David, S. (2023). On Computable Online Learning. Proceedings of The 34th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 201:707-725. Available from https://proceedings.mlr.press/v201/hasrati23a.html.