On the Complexity of Proper Distribution-Free Learning of Linear Classifiers

Philip M. Long, Raphael J. Long
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:583-591, 2020.

Abstract

For proper distribution-free learning of linear classifiers in $d$ dimensions from $m$ examples, we prove a lower bound on the optimal expected error of $\frac{d - o(1)}{m}$, improving on the best previous lower bound of $\frac{d/\sqrt{e} - o(1)}{m}$, and nearly matching a $\frac{d+1}{m+1}$ upper bound achieved by the linear support vector machine.
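In symbols, the main result can be read roughly as follows (a sketch, assuming the standard realizable distribution-free setting, with $A$ ranging over proper learners, $P$ over joint distributions consistent with some linear classifier, $S \sim P^m$ an i.i.d. sample, and $\operatorname{err}_P$ the expected error of the returned hypothesis; the exact quantifiers are an assumption here, not a quotation from the paper):

\[
\frac{d - o(1)}{m}
\;\le\;
\inf_{A \,\text{proper}} \; \sup_{P} \; \mathbb{E}_{S \sim P^m}\!\left[\operatorname{err}_P\!\big(A(S)\big)\right]
\;\le\;
\sup_{P} \; \mathbb{E}_{S \sim P^m}\!\left[\operatorname{err}_P\!\big(\mathrm{SVM}(S)\big)\right]
\;\le\;
\frac{d+1}{m+1}.
\]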

Cite this Paper


BibTeX
@InProceedings{pmlr-v117-long20a,
  title     = {On the Complexity of Proper Distribution-Free Learning of Linear Classifiers},
  author    = {Long, Philip M. and Long, Raphael J.},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  pages     = {583--591},
  year      = {2020},
  editor    = {Kontorovich, Aryeh and Neu, Gergely},
  volume    = {117},
  series    = {Proceedings of Machine Learning Research},
  month     = {08 Feb--11 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v117/long20a/long20a.pdf},
  url       = {https://proceedings.mlr.press/v117/long20a.html},
  abstract  = {For proper distribution-free learning of linear classifiers in $d$ dimensions from $m$ examples, we prove a lower bound on the optimal expected error of $\frac{d - o(1)}{m}$, improving on the best previous lower bound of $\frac{d/\sqrt{e} - o(1)}{m}$, and nearly matching a $\frac{d+1}{m+1}$ upper bound achieved by the linear support vector machine.}
}
