On the well-spread property and its relation to linear regression

Hongjie Chen, Tommaso d’Orsi
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3905-3935, 2022.

Abstract

We consider the robust linear regression model $\bm{y} = X\beta^* + \bm{\eta}$, where an adversary oblivious to the design $X \in \R^{n \times d}$ may choose $\bm{\eta}$ to corrupt all but a (possibly vanishing) fraction of the observations $\bm{y}$ in an arbitrary way. Recent work \cite{d2021consistent, d2021consistentICML} has introduced efficient algorithms for consistent recovery of the parameter vector. These algorithms crucially rely on the design matrix being well-spread (a matrix is well-spread if its column span is far from any sparse vector). In this paper, we show that there exists a family of design matrices lacking well-spreadness such that consistent recovery of the parameter vector in the above robust linear regression model is information-theoretically impossible. We further investigate the average-case time complexity of certifying well-spreadness of random matrices. We show that it is possible to efficiently certify whether a given $n$-by-$d$ Gaussian matrix is well-spread if the number of observations is quadratic in the ambient dimension. We complement this result by showing rigorous evidence, in the form of a lower bound against low-degree polynomials, of the computational hardness of this same certification problem when the number of observations is $o(d^2)$.
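To make the well-spread property concrete: a matrix $X$ is well-spread when no vector in its column span concentrates most of its $\ell_2$ mass on a few coordinates. The sketch below is a simple Monte Carlo heuristic (an assumption for illustration only, not the paper's certification algorithm, which is a proper certifier): it samples random directions in the column span of a Gaussian design and measures the largest fraction of squared mass found on the top-$k$ coordinates. A small value is consistent with well-spreadness; the function name and parameters are hypothetical.

```python
import numpy as np

def spread_heuristic(X, k, trials=1000, seed=0):
    """Monte Carlo heuristic (not a certificate): estimate how much l2 mass
    a random vector in the column span of X can place on its k largest-
    magnitude coordinates. Small values suggest the span is far from
    k-sparse vectors; this does NOT prove well-spreadness."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    worst = 0.0
    for _ in range(trials):
        w = rng.standard_normal(d)
        v = X @ w                       # random vector in the column span
        v /= np.linalg.norm(v)          # normalize to a unit vector
        topk = np.sort(np.abs(v))[-k:]  # k largest coordinate magnitudes
        worst = max(worst, float(np.sum(topk ** 2)))
    return worst

# Gaussian designs with n >> d are well-spread with high probability,
# so random span directions should not concentrate on few coordinates.
X = np.random.default_rng(1).standard_normal((2000, 20))
frac = spread_heuristic(X, k=200)
print(f"max l2 mass on top 10% of coordinates: {frac:.3f}")
```

Note that a heuristic like this can only provide evidence against well-spreadness (by exhibiting a concentrated vector); certifying that *every* vector in the span is spread out is exactly the harder problem the paper studies.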

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-chen22d,
  title     = {On the well-spread property and its relation to linear regression},
  author    = {Chen, Hongjie and d'Orsi, Tommaso},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {3905--3935},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/chen22d/chen22d.pdf},
  url       = {https://proceedings.mlr.press/v178/chen22d.html},
  abstract  = {We consider the robust linear regression model $\bm{y} = X\beta^* + \bm{\eta}$, where an adversary oblivious to the design $X \in \R^{n \times d}$ may choose $\bm{\eta}$ to corrupt all but a (possibly vanishing) fraction of the observations $\bm{y}$ in an arbitrary way. Recent work \cite{d2021consistent, d2021consistentICML} has introduced efficient algorithms for consistent recovery of the parameter vector. These algorithms crucially rely on the design matrix being well-spread (a matrix is well-spread if its column span is far from any sparse vector). In this paper, we show that there exists a family of design matrices lacking well-spreadness such that consistent recovery of the parameter vector in the above robust linear regression model is information-theoretically impossible. We further investigate the average-case time complexity of certifying well-spreadness of random matrices. We show that it is possible to efficiently certify whether a given $n$-by-$d$ Gaussian matrix is well-spread if the number of observations is quadratic in the ambient dimension. We complement this result by showing rigorous evidence, in the form of a lower bound against low-degree polynomials, of the computational hardness of this same certification problem when the number of observations is $o(d^2)$.}
}
Endnote
%0 Conference Paper
%T On the well-spread property and its relation to linear regression
%A Hongjie Chen
%A Tommaso d'Orsi
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-chen22d
%I PMLR
%P 3905--3935
%U https://proceedings.mlr.press/v178/chen22d.html
%V 178
%X We consider the robust linear regression model $\bm{y} = X\beta^* + \bm{\eta}$, where an adversary oblivious to the design $X \in \R^{n \times d}$ may choose $\bm{\eta}$ to corrupt all but a (possibly vanishing) fraction of the observations $\bm{y}$ in an arbitrary way. Recent work \cite{d2021consistent, d2021consistentICML} has introduced efficient algorithms for consistent recovery of the parameter vector. These algorithms crucially rely on the design matrix being well-spread (a matrix is well-spread if its column span is far from any sparse vector). In this paper, we show that there exists a family of design matrices lacking well-spreadness such that consistent recovery of the parameter vector in the above robust linear regression model is information-theoretically impossible. We further investigate the average-case time complexity of certifying well-spreadness of random matrices. We show that it is possible to efficiently certify whether a given $n$-by-$d$ Gaussian matrix is well-spread if the number of observations is quadratic in the ambient dimension. We complement this result by showing rigorous evidence, in the form of a lower bound against low-degree polynomials, of the computational hardness of this same certification problem when the number of observations is $o(d^2)$.
APA
Chen, H. & d'Orsi, T. (2022). On the well-spread property and its relation to linear regression. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:3905-3935. Available from https://proceedings.mlr.press/v178/chen22d.html.
