A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere

Marco Schmalhofer; Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:3285-3295, 2020.

Abstract

We show a simple perceptron-like algorithm to learn origin-centered halfspaces in $\mathbb{R}^n$ with accuracy $1-\epsilon$ and confidence $1-\delta$ in time $\mathcal{O}\left(\frac{n^2}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ using $\mathcal{O}\left(\frac{n}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ labeled examples drawn uniformly from the unit $n$-sphere. This improves upon the algorithms given in Baum (1990), Long (1994), and Servedio (1999). The time and sample complexity of our algorithm match the lower bounds given in Long (1995) up to logarithmic factors.
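The paper's specific variant is not reproduced on this page, but the learning setting it studies can be illustrated with the classic perceptron on the same distribution: a hidden origin-centered halfspace labels examples drawn uniformly from the unit sphere, and the learner updates a weight vector on each mistake. All names and parameter choices below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20    # ambient dimension (illustrative choice)
m = 5000  # number of labeled training examples

# Hidden target halfspace through the origin: x -> sign(w* . x).
w_star = rng.standard_normal(n)
w_star /= np.linalg.norm(w_star)

# Draw examples uniformly from the unit n-sphere by
# normalizing standard Gaussian vectors.
X = rng.standard_normal((m, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ w_star)

# Classic perceptron: single pass, update only on mistakes.
w = np.zeros(n)
for x_i, y_i in zip(X, y):
    if y_i * (w @ x_i) <= 0:
        w += y_i * x_i

# Estimate accuracy on fresh examples from the same distribution.
X_test = rng.standard_normal((2000, n))
X_test /= np.linalg.norm(X_test, axis=1, keepdims=True)
acc = np.mean(np.sign(X_test @ w_star) == np.sign(X_test @ w))
print(f"test accuracy: {acc:.3f}")
```

The paper's contribution is a variant of this update scheme whose time and sample complexity nearly match the known lower bounds for this distribution; the plain version above is only meant to make the problem setup concrete.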

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-schmalhofer20a,
  title     = {A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere},
  author    = {Schmalhofer, Marco},
  pages     = {3285--3295},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/schmalhofer20a/schmalhofer20a.pdf},
  url       = {http://proceedings.mlr.press/v125/schmalhofer20a.html},
  abstract  = {We show a simple perceptron-like algorithm to learn origin-centered halfspaces in $\mathbb{R}^n$ with accuracy $1-\epsilon$ and confidence $1-\delta$ in time $\mathcal{O}\left(\frac{n^2}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ using $\mathcal{O}\left(\frac{n}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ labeled examples drawn uniformly from the unit $n$-sphere. This improves upon algorithms given in Baum(1990), Long(1994) and Servedio(1999). The time and sample complexity of our algorithm match the lower bounds given in Long(1995) up to logarithmic factors.}
}
Endnote
%0 Conference Paper
%T A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere
%A Marco Schmalhofer
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-schmalhofer20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 3285--3295
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X We show a simple perceptron-like algorithm to learn origin-centered halfspaces in $\mathbb{R}^n$ with accuracy $1-\epsilon$ and confidence $1-\delta$ in time $\mathcal{O}\left(\frac{n^2}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ using $\mathcal{O}\left(\frac{n}{\epsilon}\left(\log \frac{1}{\epsilon}+\log \frac{1}{\delta}\right)\right)$ labeled examples drawn uniformly from the unit $n$-sphere. This improves upon algorithms given in Baum(1990), Long(1994) and Servedio(1999). The time and sample complexity of our algorithm match the lower bounds given in Long(1995) up to logarithmic factors.
APA
Schmalhofer, M. (2020). A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:3285-3295.