The Gradient Complexity of Linear Regression

Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:627-647, 2020.

Abstract

We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle. We show that for polynomial accuracy, $\Theta(d)$ calls to the oracle are necessary and sufficient even for a randomized algorithm. Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix. This simple distribution enables a concise proof, leveraging a few key properties of the random Wishart ensemble.
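The matching $\Theta(d)$ upper bound in the abstract comes from classical Krylov-subspace methods, which touch the data only through matrix-vector products. The sketch below is purely illustrative, not the paper's construction: it runs conjugate gradient on a system $Mx = b$ where $M = G^\top G$ is a Wishart-type matrix (the ensemble behind the paper's lower bound), accessing $M$ only through an oracle that counts its own calls. All names (`matvec`, `cg`, the dimensions) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 40

# Wishart-type matrix M = G^T G with i.i.d. Gaussian G -- the random
# ensemble used in the paper's lower-bound reduction.
G = rng.standard_normal((n, d))
M = G.T @ G
b = rng.standard_normal(d)

calls = 0
def matvec(v):
    """Matrix-vector product oracle: the only access the solver has to M."""
    global calls
    calls += 1
    return M @ v

def cg(b, iters, tol=1e-10):
    """Conjugate gradient for M x = b, using only the matvec oracle."""
    x = np.zeros_like(b)
    r = b - matvec(x)        # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = matvec(p)       # one oracle call per iteration
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# In exact arithmetic CG terminates in at most d iterations, so O(d)
# oracle calls suffice for polynomial accuracy; we allow a small
# constant-factor slack here for floating-point arithmetic.
x = cg(b, iters=3 * d)
residual = np.linalg.norm(M @ x - b)
```

The point of the counter is that the solver's query complexity is explicit: the paper's lower bound says that, up to constants, no randomized algorithm in this oracle model can do better than the $\Theta(d)$ calls this kind of method makes.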

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-braverman20a,
  title     = {The Gradient Complexity of Linear Regression},
  author    = {Braverman, Mark and Hazan, Elad and Simchowitz, Max and Woodworth, Blake},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {627--647},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/braverman20a/braverman20a.pdf},
  url       = {https://proceedings.mlr.press/v125/braverman20a.html},
  abstract  = {We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle. We show that for polynomial accuracy, $\Theta(d)$ calls to the oracle are necessary and sufficient even for a randomized algorithm. Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix. This simple distribution enables a concise proof, leveraging a few key properties of the random Wishart ensemble.}
}
Endnote
%0 Conference Paper
%T The Gradient Complexity of Linear Regression
%A Mark Braverman
%A Elad Hazan
%A Max Simchowitz
%A Blake Woodworth
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-braverman20a
%I PMLR
%P 627--647
%U https://proceedings.mlr.press/v125/braverman20a.html
%V 125
%X We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle. We show that for polynomial accuracy, $\Theta(d)$ calls to the oracle are necessary and sufficient even for a randomized algorithm. Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix. This simple distribution enables a concise proof, leveraging a few key properties of the random Wishart ensemble.
APA
Braverman, M., Hazan, E., Simchowitz, M., & Woodworth, B. (2020). The Gradient Complexity of Linear Regression. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:627-647. Available from https://proceedings.mlr.press/v125/braverman20a.html.