The Extended Littlestone’s Dimension for Learning with Mistakes and Abstentions

Chicheng Zhang, Kamalika Chaudhuri
29th Annual Conference on Learning Theory, PMLR 49:1584-1616, 2016.

Abstract

This paper studies classification with an abstention option in the online setting. In this setting, examples arrive sequentially, the learner is given a hypothesis class $\mathcal{H}$, and the goal of the learner is to either predict a label on each example or abstain, while ensuring that it does not make more than a pre-specified number of mistakes when it does predict a label. Previous work on this problem has left open two main challenges. First, not much is known about the optimality of algorithms, and in particular, about what an optimal algorithmic strategy is for any individual hypothesis class. Second, while the realizable case has been studied, the more realistic non-realizable scenario is not well-understood. In this paper, we address both challenges. First, we provide a novel measure, called the Extended Littlestone’s Dimension, which captures the number of abstentions needed to ensure a certain number of mistakes. Second, we explore the non-realizable case, and provide upper and lower bounds on the number of abstentions required by an algorithm to guarantee a specified number of mistakes.
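To make the protocol concrete, below is a minimal Python sketch (illustrative only, not the paper's algorithm) of online classification with an abstention option: a version-space learner over a finite hypothesis class that abstains whenever the surviving consistent hypotheses disagree on the current example. In the realizable case this rule makes zero mistakes, and the abstract's central question is how many abstentions such a mistake guarantee costs; the function name run_abstaining_learner and the threshold class in the demo are invented for illustration.

# A minimal sketch of learning with mistakes and abstentions over a
# finite hypothesis class (not the paper's algorithm).
def run_abstaining_learner(hypotheses, stream):
    """hypotheses: list of callables x -> {0, 1}; stream: iterable of (x, y)."""
    version_space = list(hypotheses)
    mistakes, abstentions = 0, 0
    for x, y in stream:
        preds = {h(x) for h in version_space}
        if len(preds) == 1:              # all consistent hypotheses agree
            if preds.pop() != y:
                mistakes += 1            # a committed prediction was wrong
        else:                            # disagreement: abstain on this round
            abstentions += 1
        # update the version space with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes, abstentions

if __name__ == "__main__":
    # Thresholds on {0, ..., 5}: h_t(x) = 1 iff x >= t; target is t = 3.
    hypotheses = [lambda x, t=t: int(x >= t) for t in range(6)]
    target = lambda x: int(x >= 3)
    stream = [(x, target(x)) for x in [0, 4, 2, 3, 1]]
    print(run_abstaining_learner(hypotheses, stream))  # prints (0, 4)

On this realizable stream the learner makes zero mistakes at the price of four abstentions; the paper's Extended Littlestone's Dimension characterizes, for a given hypothesis class, the abstention cost of guaranteeing a given mistake budget.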

Cite this Paper


BibTeX

@InProceedings{pmlr-v49-zhang16a,
  title     = {The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions},
  author    = {Zhang, Chicheng and Chaudhuri, Kamalika},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {1584--1616},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/zhang16a.pdf},
  url       = {https://proceedings.mlr.press/v49/zhang16a.html}
}
APA
Zhang, C., & Chaudhuri, K. (2016). The Extended Littlestone’s Dimension for Learning with Mistakes and Abstentions. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:1584-1616. Available from https://proceedings.mlr.press/v49/zhang16a.html.
