Strong Memory Lower Bounds for Learning Natural Models

Gavin Brown, Mark Bun, Adam Smith
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4989-5029, 2022.

Abstract

We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving several natural learning problems. In a setting where examples lie in $\{0,1\}^d$ and the optimal classifier can be encoded using $\kappa$ bits, we show that algorithms which learn to constant error using a near-minimal number of examples, $\tilde O(\kappa)$, must use $\tilde \Omega(d\kappa)$ bits of space. Our space bounds match the dimension of the ambient space of the problem’s natural parametrization, even when it is quadratic in the size of examples and the final classifier. For instance, in the setting of $d$-sparse linear classifiers over degree-2 polynomial features, for which $\kappa=\Theta(d\log d)$, our space lower bound is $\tilde\Omega(d^2)$. Our bounds degrade gracefully with the stream length $N$, generally having the form $\tilde\Omega(d\kappa \cdot \frac{\kappa}{N})$. Bounds of the form $\Omega(d\kappa)$ were known for learning parity and other problems defined over finite fields. Bounds that apply in a narrow range of sample sizes are also known for linear regression. Ours are the first such bounds for problems of the type commonly seen in recent learning applications that apply for a large range of input sizes.
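
As a quick sanity check on how the quantities in the abstract fit together (an illustrative calculation only, not a claim from the paper beyond what is stated above), instantiating the general trade-off with the sparse-polynomial example and a near-minimal stream length recovers the quoted $\tilde\Omega(d^2)$ bound:

\[
\kappa = \Theta(d\log d), \qquad N = \tilde O(\kappa)
\;\Longrightarrow\;
\tilde\Omega\!\left(d\kappa \cdot \tfrac{\kappa}{N}\right)
= \tilde\Omega(d\kappa)
= \tilde\Omega(d \cdot d\log d)
= \tilde\Omega(d^2),
\]

where the $\tilde\Omega$ notation absorbs the remaining $\log d$ factor.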

Cite this Paper

BibTeX
@InProceedings{pmlr-v178-brown22a,
  title     = {Strong Memory Lower Bounds for Learning Natural Models},
  author    = {Brown, Gavin and Bun, Mark and Smith, Adam},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {4989--5029},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/brown22a/brown22a.pdf},
  url       = {https://proceedings.mlr.press/v178/brown22a.html},
  abstract  = {We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving several natural learning problems. In a setting where examples lie in $\{0,1\}^d$ and the optimal classifier can be encoded using $\kappa$ bits, we show that algorithms which learn to constant error using a near-minimal number of examples, $\tilde O(\kappa)$, must use $\tilde \Omega(d\kappa)$ bits of space. Our space bounds match the dimension of the ambient space of the problem’s natural parametrization, even when it is quadratic in the size of examples and the final classifier. For instance, in the setting of $d$-sparse linear classifiers over degree-2 polynomial features, for which $\kappa=\Theta(d\log d)$, our space lower bound is $\tilde\Omega(d^2)$. Our bounds degrade gracefully with the stream length $N$, generally having the form $\tilde\Omega(d\kappa \cdot \frac{\kappa}{N})$. Bounds of the form $\Omega(d\kappa)$ were known for learning parity and other problems defined over finite fields. Bounds that apply in a narrow range of sample sizes are also known for linear regression. Ours are the first such bounds for problems of the type commonly seen in recent learning applications that apply for a large range of input sizes.}
}
Endnote
%0 Conference Paper
%T Strong Memory Lower Bounds for Learning Natural Models
%A Gavin Brown
%A Mark Bun
%A Adam Smith
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-brown22a
%I PMLR
%P 4989--5029
%U https://proceedings.mlr.press/v178/brown22a.html
%V 178
%X We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving several natural learning problems. In a setting where examples lie in $\{0,1\}^d$ and the optimal classifier can be encoded using $\kappa$ bits, we show that algorithms which learn to constant error using a near-minimal number of examples, $\tilde O(\kappa)$, must use $\tilde \Omega(d\kappa)$ bits of space. Our space bounds match the dimension of the ambient space of the problem’s natural parametrization, even when it is quadratic in the size of examples and the final classifier. For instance, in the setting of $d$-sparse linear classifiers over degree-2 polynomial features, for which $\kappa=\Theta(d\log d)$, our space lower bound is $\tilde\Omega(d^2)$. Our bounds degrade gracefully with the stream length $N$, generally having the form $\tilde\Omega(d\kappa \cdot \frac{\kappa}{N})$. Bounds of the form $\Omega(d\kappa)$ were known for learning parity and other problems defined over finite fields. Bounds that apply in a narrow range of sample sizes are also known for linear regression. Ours are the first such bounds for problems of the type commonly seen in recent learning applications that apply for a large range of input sizes.
APA
Brown, G., Bun, M. & Smith, A. (2022). Strong Memory Lower Bounds for Learning Natural Models. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:4989-5029. Available from https://proceedings.mlr.press/v178/brown22a.html.
