New Bounds on Compressive Linear Least Squares Regression

Ata Kaban;
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:448-456, 2014.

Abstract

In this paper we provide a new analysis of compressive least squares regression that removes a spurious log N factor from previous bounds, where N is the number of training points. Our new bound has a clear interpretation and reveals meaningful structural properties of the linear regression problem that make it solvable effectively in a low-dimensional random subspace. In addition, the main part of our analysis does not require the compressive matrix to have the Johnson-Lindenstrauss property or the restricted isometry property (RIP). Instead, we only require its entries to be drawn i.i.d. from a zero-mean symmetric distribution with finite first four moments.
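The setting described in the abstract can be sketched as follows: ordinary least squares is solved in a random subspace obtained by multiplying the design matrix with a compressive matrix whose entries are drawn i.i.d. from a zero-mean symmetric distribution with finite first four moments (a Rademacher matrix is used here as one such choice). The data, dimensions, and scaling below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression data: N points in D dimensions.
N, D, k = 200, 50, 10
X = rng.standard_normal((N, D))
w_true = rng.standard_normal(D)
y = X @ w_true + 0.1 * rng.standard_normal(N)

# Compressive matrix R: entries i.i.d. from a zero-mean symmetric
# distribution with finite first four moments (Rademacher here).
R = rng.choice([-1.0, 1.0], size=(D, k)) / np.sqrt(k)

# Ordinary least squares in the k-dimensional random subspace.
Xc = X @ R
w_c, *_ = np.linalg.lstsq(Xc, y, rcond=None)
mse_compressed = np.mean((Xc @ w_c - y) ** 2)

# Full-dimensional OLS for comparison.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_full = np.mean((X @ w_full - y) ** 2)
```

Since the compressed predictor X R w_c lies in the span of the columns of X, its training error can never beat full-dimensional OLS; the point of the paper's bounds is to quantify how little is lost when the regression problem has the right structure.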
