Statistical and Algorithmic Perspectives on Randomized Sketching for Ordinary Least-Squares
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:617-625, 2015.
Abstract
We consider statistical and algorithmic aspects of solving large-scale least-squares (LS) problems using randomized sketching algorithms. Prior results show that, from an algorithmic perspective, when using sketching matrices constructed from random projections and leverage-score sampling, if the number of samples r is much smaller than the original sample size n, then the worst-case (WC) error is the same as solving the original problem, up to a very small relative error. From a statistical perspective, one typically considers the mean-squared error performance of randomized sketching algorithms when data are generated according to a statistical linear model. In this paper, we provide a rigorous comparison of both perspectives, leading to insights on how they differ. To do this, we first develop a framework for assessing, in a unified manner, algorithmic and statistical aspects of randomized sketching methods. We then consider the statistical prediction efficiency (PE) and the statistical residual efficiency (RE) of the sketched LS estimator, and we use our framework to provide upper bounds for several types of random projection and random sampling algorithms. Among other results, we show that the RE can be upper bounded when r is much smaller than n, while the PE typically requires the number of samples r to be substantially larger. Lower bounds developed in subsequent work show that our upper bounds on the PE cannot be improved.
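To make the two efficiency notions concrete, the following is a minimal Python sketch (not the paper's implementation) that compares full OLS with a sketched solve based on a Gaussian random projection, on synthetic data from a statistical linear model. The dimensions n, p, r and the ratio computations are illustrative assumptions: the ratios are simple empirical proxies for the residual efficiency (RE) and prediction efficiency (PE) discussed in the abstract, which are formally defined via expectations under the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n samples, p features, sketch dimension r << n.
n, p, r = 10_000, 20, 500

# Synthetic data from a linear model y = X beta + noise.
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# Full ordinary least-squares solution for reference.
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gaussian sketching matrix S (r x n); solve the much smaller r x p problem.
S = rng.standard_normal((r, n)) / np.sqrt(r)
beta_sketch, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)

# RE-style proxy: residual norm of the sketched fit vs. the full OLS fit.
re_ratio = np.linalg.norm(y - X @ beta_sketch) / np.linalg.norm(y - X @ beta_full)

# PE-style proxy: prediction error of the sketched fit vs. the full OLS fit,
# measured against the true beta (available here because the data are synthetic).
pe_ratio = np.linalg.norm(X @ (beta_sketch - beta)) / np.linalg.norm(X @ (beta_full - beta))

print(f"RE-style ratio: {re_ratio:.3f}")
print(f"PE-style ratio: {pe_ratio:.3f}")
```

With r much smaller than n, one typically sees the RE-style ratio stay close to 1 while the PE-style ratio is substantially larger, which is the qualitative gap between the two perspectives that the paper quantifies.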