Efficient Co-Training of Linear Separators under Weak Dependence

Avrim Blum, Yishay Mansour
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:302-318, 2017.

Abstract

We develop the first polynomial-time algorithm for co-training of homogeneous linear separators under weak dependence, a relaxation of the condition of independence given the label. Our algorithm learns from purely unlabeled data, except for a single labeled example to break symmetry of the two classes, and works for any data distribution having an inverse-polynomial margin and with center of mass at the origin.
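As context for the co-training setting the abstract refers to (and emphatically not the paper's algorithm), the following minimal Python sketch runs a generic confidence-based co-training loop: each example has two views, each view admits a homogeneous linear separator, and a single labeled example breaks the symmetry between the two classes. All names, data-generation choices, and parameters below are illustrative assumptions.

# Hypothetical illustration (not the paper's algorithm): generic
# confidence-based co-training with two views, each admitting a
# homogeneous linear separator, on synthetic data in which the views are
# conditionally independent given the label -- the classical assumption
# that the paper's "weak dependence" condition relaxes.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 2000

# Ground-truth separators for the two views (hidden from the learner).
w1_true = rng.normal(size=d); w1_true /= np.linalg.norm(w1_true)
w2_true = rng.normal(size=d); w2_true /= np.linalg.norm(w2_true)

# Synthetic two-view data: flip each row so its view agrees with the label,
# which makes the views conditionally independent given y by construction.
y = rng.choice([-1, 1], size=n)
X1 = rng.normal(size=(n, d))
X2 = rng.normal(size=(n, d))
X1 *= np.where(np.sign(X1 @ w1_true) == y, 1.0, -1.0)[:, None]
X2 *= np.where(np.sign(X2 @ w2_true) == y, 1.0, -1.0)[:, None]

def fit_linear(X, labels):
    # Least-squares fit of a homogeneous separator sign(w . x).
    w, *_ = np.linalg.lstsq(X, labels.astype(float), rcond=None)
    return w / (np.linalg.norm(w) + 1e-12)

# A single labeled example (index 0) breaks the +/- symmetry of the classes.
w1 = y[0] * X1[0] / np.linalg.norm(X1[0])
w2 = y[0] * X2[0] / np.linalg.norm(X2[0])

for it in range(10):
    # Each view pseudo-labels the unlabeled points it is most confident on,
    # and the *other* view is refit on those pseudo-labels.
    conf1, conf2 = X1 @ w1, X2 @ w2
    k = 50 * (it + 1)
    top1 = np.argsort(-np.abs(conf1))[:k]
    top2 = np.argsort(-np.abs(conf2))[:k]
    w2 = fit_linear(X2[top1], np.sign(conf1[top1]))
    w1 = fit_linear(X1[top2], np.sign(conf2[top2]))

print("view-1 accuracy:", np.mean(np.sign(X1 @ w1) == y))
print("view-2 accuracy:", np.mean(np.sign(X2 @ w2) == y))

This toy loop only illustrates how two views can bootstrap each other from a single labeled example; it carries none of the paper's guarantees under weak dependence or inverse-polynomial margin.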

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-blum17a,
  title     = {Efficient Co-Training of Linear Separators under Weak Dependence},
  author    = {Blum, Avrim and Mansour, Yishay},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {302--318},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/blum17a/blum17a.pdf},
  url       = {https://proceedings.mlr.press/v65/blum17a.html},
  abstract  = {We develop the first polynomial-time algorithm for co-training of homogeneous linear separators under {\em weak dependence}, a relaxation of the condition of independence given the label. Our algorithm learns from purely unlabeled data, except for a single labeled example to break symmetry of the two classes, and works for any data distribution having an inverse-polynomial margin and with center of mass at the origin.}
}
Endnote
%0 Conference Paper
%T Efficient Co-Training of Linear Separators under Weak Dependence
%A Avrim Blum
%A Yishay Mansour
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-blum17a
%I PMLR
%P 302--318
%U https://proceedings.mlr.press/v65/blum17a.html
%V 65
%X We develop the first polynomial-time algorithm for co-training of homogeneous linear separators under weak dependence, a relaxation of the condition of independence given the label. Our algorithm learns from purely unlabeled data, except for a single labeled example to break symmetry of the two classes, and works for any data distribution having an inverse-polynomial margin and with center of mass at the origin.
APA
Blum, A. & Mansour, Y. (2017). Efficient Co-Training of Linear Separators under Weak Dependence. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:302-318. Available from https://proceedings.mlr.press/v65/blum17a.html.
