New Bounds on Compressive Linear Least Squares Regression

Ata Kaban
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:448-456, 2014.

Abstract

In this paper we provide a new analysis of compressive least squares regression that removes a spurious log N factor from previous bounds, where N is the number of training points. Our new bound has a clear interpretation and reveals meaningful structural properties of the linear regression problem that make it effectively solvable in a low-dimensional random subspace. In addition, the main part of our analysis does not require the compression matrix to have the Johnson-Lindenstrauss property or the restricted isometry property (RIP); instead, we only require its entries to be drawn i.i.d. from a zero-mean symmetric distribution with finite first four moments.
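To make the setting concrete, below is a minimal sketch in Python of the compressive least squares procedure the abstract describes: compress the inputs with a random matrix whose entries are i.i.d. zero-mean, symmetric, and have finite first four moments, then solve ordinary least squares in the compressed space. The Rademacher entries, dimensions, and noise level are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
N, d, k = 200, 100, 10  # training points, ambient dimension, compressed dimension

# Synthetic linear regression data (illustrative only).
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

# Random compression matrix with i.i.d. Rademacher entries (+1/-1 with
# probability 1/2 each): zero-mean, symmetric, all moments finite.
R = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

XR = X @ R.T                                  # compressed design, shape (N, k)
w_k, *_ = np.linalg.lstsq(XR, y, rcond=None)  # OLS fit in the k-dim subspace

y_hat = XR @ w_k
print("training MSE of compressed fit:", np.mean((y_hat - y) ** 2))

Note that the estimator never forms a d-dimensional weight vector: both fitting and prediction happen in the k-dimensional random subspace, which is what the paper's excess-risk bounds are about.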

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-kaban14,
  title     = {New Bounds on Compressive Linear Least Squares Regression},
  author    = {Kaban, Ata},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {448--456},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/kaban14.pdf},
  url       = {https://proceedings.mlr.press/v33/kaban14.html}
}
APA
Kaban, A. (2014). New Bounds on Compressive Linear Least Squares Regression. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:448-456. Available from https://proceedings.mlr.press/v33/kaban14.html.