Efficient Learning of Linear Separators under Bounded Noise

Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner
Proceedings of The 28th Conference on Learning Theory, PMLR 40:167-190, 2015.

Abstract

We study the learnability of linear separators in ℝ^d in the presence of bounded (a.k.a. Massart) noise. This is a realistic generalization of the random classification noise model, where the adversary can flip the label of each example x with probability η(x) ≤ η. We provide the first polynomial-time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit sphere in ℝ^d, for some constant value of η. While this noise model has been widely studied in the statistical learning theory community in the context of obtaining faster convergence rates, computationally efficient algorithms for it had remained elusive. Our work provides the first evidence that one can indeed design algorithms achieving arbitrarily small excess error in polynomial time under this realistic noise model, and thus opens up a new and exciting line of research. We additionally provide lower bounds showing that popular algorithms such as hinge loss minimization and averaging cannot achieve arbitrarily small excess error under Massart noise, even under the uniform distribution. Our work instead makes use of a margin-based technique developed in the context of active learning. As a result, our algorithm is also an active learning algorithm whose label complexity is only logarithmic in the desired excess error ε.
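
To make the setup concrete, below is a minimal, self-contained Python sketch of (a) the bounded-noise data model the abstract describes and (b) the general shape of a margin-based localization loop from active learning. Everything in it is illustrative: the constant flip rate η(x) = η, the band schedule b_k = 2^(-k), the hinge-loss inner learner, and the averaging warm start are placeholder choices for exposition, not the algorithm or parameters analyzed in the paper.

import numpy as np

def sample_uniform_sphere(n, d, rng):
    """Draw n points uniformly from the unit sphere in R^d."""
    X = rng.standard_normal((n, d))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def massart_labels(X, w_star, eta, rng):
    """Clean label is sign(<w*, x>); each label is flipped independently
    with probability eta(x) <= eta. Using the constant rate eta(x) = eta
    is just one admissible adversary among many."""
    y = np.sign(X @ w_star)
    flip = rng.random(len(y)) < eta
    return np.where(flip, -y, y)

def hinge_fit(X, y, w0, steps=500, lr=0.05):
    """Subgradient descent on average hinge loss, projected to the unit
    sphere. A stand-in inner learner for illustration only; the paper
    shows plain hinge minimization by itself cannot reach arbitrarily
    small excess error under Massart noise."""
    w = w0.copy()
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1.0          # points contributing a subgradient
        if active.any():
            grad = -(y[active, None] * X[active]).mean(axis=0)
            w = w - lr * grad
        w /= np.linalg.norm(w)
    return w

def margin_based_learn(w0, w_star, eta, d, rounds=8, n_per_round=4000, seed=0):
    """Each round restricts attention to a band of shrinking width b_k
    around the current separator and refits there (localization).
    Labels are requested only inside the band, which is where the
    log(1/eps)-type label complexity of such schemes comes from."""
    rng = np.random.default_rng(seed)
    w = w0 / np.linalg.norm(w0)
    for k in range(rounds):
        b_k = 2.0 ** (-k)               # placeholder band schedule
        X = sample_uniform_sphere(n_per_round, d, rng)
        in_band = np.abs(X @ w) <= b_k
        Xb = X[in_band]
        if len(Xb) == 0:
            continue
        yb = massart_labels(Xb, w_star, eta, rng)
        w = hinge_fit(Xb, yb, w)
    return w

# Tiny demo (hypothetical parameters): warm-start by averaging
# label-weighted points -- enough for constant error, consistent with
# the paper's lower bound for averaging -- then localize.
d, eta = 10, 0.1
rng = np.random.default_rng(1)
w_star = sample_uniform_sphere(1, d, rng)[0]
X0 = sample_uniform_sphere(2000, d, rng)
y0 = massart_labels(X0, w_star, eta, rng)
w0 = (y0[:, None] * X0).mean(axis=0)
w_hat = margin_based_learn(w0, w_star, eta, d)
print("angle to w*:", float(np.arccos(np.clip(w_hat @ w_star, -1.0, 1.0))))

The point the sketch conveys is where the label savings come from: labels are requested only inside the shrinking band around the current hypothesis, so the number of rounds (and hence of labeled examples) scales with log(1/ε) rather than 1/ε.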

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Awasthi15b,
  title     = {Efficient Learning of Linear Separators under Bounded Noise},
  author    = {Awasthi, Pranjal and Balcan, Maria-Florina and Haghtalab, Nika and Urner, Ruth},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {167--190},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Awasthi15b.pdf},
  url       = {https://proceedings.mlr.press/v40/Awasthi15b.html}
}
APA
Awasthi, P., Balcan, M., Haghtalab, N., & Urner, R. (2015). Efficient Learning of Linear Separators under Bounded Noise. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:167-190. Available from https://proceedings.mlr.press/v40/Awasthi15b.html.
