Active Learning with Simple Questions

Vasilis Kontonis, Mingchen Ma, Christos Tzamos
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3064-3098, 2024.

Abstract

We consider an active learning setting where a learner is presented with a pool $S$ of $n$ unlabeled examples belonging to a domain $\mathcal X$ and asks queries to find the underlying labeling that agrees with a target concept $h^\ast \in \mathcal H$. In contrast to traditional active learning, which queries a single example for its label, we study more general \emph{region queries} that allow the learner to pick a subset of the domain $T \subset \mathcal X$ and a target label $y$ and ask a labeler whether $h^\ast(x) = y$ for every example in the set $T \cap S$. Such more powerful queries allow the learner to bypass the limitations of traditional active learning and to learn with significantly fewer rounds of interaction, but they can also require a significantly more complex query language. Our main contribution is quantifying the trade-off between the number of queries and the complexity of the query language used by the learner. We measure the complexity of the region queries via the VC dimension of the family of regions. We show that given any hypothesis class $\mathcal H$ with VC dimension $d$, one can design a region query family $Q$ with VC dimension $6d$ such that for every set of $n$ examples $S \subset \mathcal X$ and every $h^\ast \in \mathcal H$, a learner can submit $O(d\log n)$ queries from $Q$ to a labeler and perfectly label $S$. We show a matching lower bound by designing a hypothesis class $\mathcal H$ with VC dimension $d$ and a dataset $S \subset \mathcal X$ of size $n$ such that any learning algorithm using any query class with VC dimension $(d-2)/3$ must make $\mathrm{poly}(n)$ queries to label $S$ perfectly. Finally, we focus on well-studied hypothesis classes, including unions of intervals, high-dimensional boxes, and $d$-dimensional halfspaces, and obtain stronger results. In particular, we design learning algorithms that (i) are computationally efficient and (ii) work even when the queries are not answered based on the learner’s pool of examples $S$ but on some unknown superset $L$ of $S$.
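To make the query model concrete, the following is a minimal Python sketch of the region-query interaction for the toy class of one-dimensional thresholds, where intervals serve as the query regions. The names (`Labeler`, `learn_threshold_pool`) and the binary-search strategy are illustrative assumptions for this simple special case, not the paper's general $O(d\log n)$-query algorithm.

```python
# Minimal sketch of the region-query interaction described in the abstract,
# illustrated on the toy class of 1-D thresholds h_t(x) = 1 if x >= t else 0.
# The names below are illustrative, not from the paper; the paper's general
# result handles arbitrary hypothesis classes of VC dimension d.

from typing import Callable, List, Tuple


class Labeler:
    """Answers region queries with respect to a hidden target concept h*."""

    def __init__(self, pool: List[float], target: Callable[[float], int]):
        self.pool = pool
        self.target = target
        self.num_queries = 0

    def region_query(self, region: Tuple[float, float], label: int) -> bool:
        """Return True iff h*(x) == label for every pool point x in [a, b]."""
        self.num_queries += 1
        a, b = region
        return all(self.target(x) == label for x in self.pool if a <= x <= b)


def learn_threshold_pool(pool: List[float], labeler: Labeler) -> List[int]:
    """Label the whole pool with O(log n) interval (region) queries:
    binary-search for the leftmost pool point labeled 1 by the hidden threshold."""
    xs = sorted(pool)
    lo, hi = 0, len(xs)  # invariant: the boundary index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        # Ask: "are all pool points in [xs[mid], +inf) labeled 1?"
        if labeler.region_query((xs[mid], float("inf")), 1):
            hi = mid
        else:
            lo = mid + 1
    boundary = xs[lo] if lo < len(xs) else float("inf")
    return [1 if x >= boundary else 0 for x in pool]


if __name__ == "__main__":
    pool = [0.3, 2.5, 1.1, 4.0, 3.3, 0.9]
    hidden = lambda x: 1 if x >= 2.0 else 0  # the unknown target h*
    labeler = Labeler(pool, hidden)
    labels = learn_threshold_pool(pool, labeler)
    assert labels == [hidden(x) for x in pool]
    print(f"labeled {len(pool)} points with {labeler.num_queries} region queries")
```

In this toy run, three interval queries suffice to label a pool of six points; the paper's contribution is an analogous $O(d\log n)$ guarantee for an arbitrary hypothesis class of VC dimension $d$, using a query family whose VC dimension is at most $6d$.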

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-vasilis24a,
  title     = {Active Learning with Simple Questions},
  author    = {Kontonis, Vasilis and Ma, Mingchen and Tzamos, Christos},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {3064--3098},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/vasilis24a/vasilis24a.pdf},
  url       = {https://proceedings.mlr.press/v247/vasilis24a.html}
}
Endnote
%0 Conference Paper
%T Active Learning with Simple Questions
%A Vasilis Kontonis
%A Mingchen Ma
%A Christos Tzamos
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-vasilis24a
%I PMLR
%P 3064--3098
%U https://proceedings.mlr.press/v247/vasilis24a.html
%V 247
APA
Kontonis, V., Ma, M. & Tzamos, C. (2024). Active Learning with Simple Questions. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:3064-3098. Available from https://proceedings.mlr.press/v247/vasilis24a.html.
