Computationally and Statistically Efficient Truncated Regression

Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:955-960, 2019.

Abstract

We provide a computationally and statistically efficient estimator for the classical problem of truncated linear regression, where the dependent variable $y = \vec{w}^{\rm T} \vec{x}+{\varepsilon}$ and its corresponding vector of covariates $\vec{x} \in \mathbb{R}^k$ are only revealed if the dependent variable falls in some subset $S \subseteq \mathbb{R}$; otherwise the existence of the pair $(\vec{x},y)$ is hidden. This problem has remained a challenge since the early works of Tobin 1958, Amemiya et al. 1973, Hausman et al. 1977, and Breen et al. 1996; its applications are abundant, and its history dates back even further to the work of Galton 1897, Pearson 1908, Lee 1915, and Fisher 1931. While consistent estimators of the regression coefficients have been identified, the error rates are not well-understood, especially in high-dimensional settings. Under a “thickness assumption” about the covariance matrix of the covariates in the revealed sample, we provide a computationally efficient estimator for the coefficient vector $\vec{w}$ from $n$ revealed samples that attains $\ell_2$ error $\tilde{O}(\sqrt{k / n})$, almost recovering the guarantees of least squares in the standard (untruncated) linear regression setting. Our estimator uses Projected Stochastic Gradient Descent (PSGD) on the negative log-likelihood of the truncated sample, and only needs oracle access to the set $S$, which may otherwise be arbitrary, and in particular may be non-convex. PSGD must be restricted to an appropriately defined convex cone to guarantee that the negative log-likelihood is strongly convex, which in turn is established using concentration of matrices on variables with sub-exponential tails. We perform experiments on simulated data to illustrate the accuracy of our estimator. As a corollary of our work, we show that SGD provably learns the parameters of single-layer neural networks with noisy ReLU activation functions (see Nair and Hinton 2010, Bengio et al. 2013, Gulcehre et al. 2016), given a number of input-output pairs that is linear in the number of network parameters, in the realizable setting.
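
To make the estimator concrete, here is a minimal sketch of the PSGD approach on simulated data, written in Python/NumPy under several simplifying assumptions that are not from the paper: the noise variance is fixed to 1, the truncation set is taken to be $S = [0, \infty)$ purely as an example, the conditional mean inside the gradient is estimated by rejection sampling through the membership oracle, and the projection step uses a plain Euclidean ball in place of the paper's convex cone. All function names (oracle_S, conditional_mean, project, psgd_truncated_regression) are illustrative, not the authors' code.

```python
# Minimal sketch (not the authors' exact procedure): Projected SGD on the
# negative log-likelihood of truncated linear regression, assuming unit
# noise variance and only membership-oracle access to the survival set S.
import numpy as np

rng = np.random.default_rng(0)

def oracle_S(z):
    """Membership oracle for the truncation set S (here: S = [0, +inf))."""
    return z >= 0.0

def conditional_mean(mu, n_draws=256):
    """Estimate E[z | z in S] for z ~ N(mu, 1) via rejection sampling."""
    z = rng.normal(mu, 1.0, size=n_draws)
    kept = z[oracle_S(z)]
    return kept.mean() if kept.size > 0 else mu  # fall back if nothing survives

def project(w, center, radius=10.0):
    """Projection step; here simply onto a Euclidean ball (an assumption,
    standing in for the convex cone used in the paper)."""
    d = w - center
    norm = np.linalg.norm(d)
    return center + d * (radius / norm) if norm > radius else w

def psgd_truncated_regression(X, y, steps=20000, lr=1e-3):
    k = X.shape[1]
    w = np.zeros(k)
    for _ in range(steps):
        i = rng.integers(len(y))
        x_i, y_i = X[i], y[i]
        # Stochastic gradient of the per-sample negative log-likelihood:
        # (E[z | z in S, z ~ N(w^T x, 1)] - y) * x
        grad = (conditional_mean(w @ x_i) - y_i) * x_i
        w = project(w - lr * grad, center=np.zeros(k))
    return w

# Simulated truncated data: only pairs whose response lands in S are revealed.
k, n_raw = 5, 20000
w_true = rng.normal(size=k)
X_raw = rng.normal(size=(n_raw, k))
y_raw = X_raw @ w_true + rng.normal(size=n_raw)
mask = oracle_S(y_raw)
X_obs, y_obs = X_raw[mask], y_raw[mask]

w_hat = psgd_truncated_regression(X_obs, y_obs)
print("recovery error:", np.linalg.norm(w_hat - w_true))
```

The per-sample gradient follows from differentiating the truncated negative log-likelihood $\tfrac{1}{2}(y - \vec{w}^{\rm T}\vec{x})^2 + \log \int_S e^{-(z - \vec{w}^{\rm T}\vec{x})^2/2}\,dz$, which gives $(\mathbb{E}[z \mid z \in S] - y)\,\vec{x}$ with $z \sim \mathcal{N}(\vec{w}^{\rm T}\vec{x}, 1)$; estimating that conditional expectation requires only membership queries to $S$, matching the oracle-access requirement described above.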

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-daskalakis19a,
  title     = {Computationally and Statistically Efficient Truncated Regression},
  author    = {Daskalakis, Constantinos and Gouleakis, Themis and Tzamos, Christos and Zampetakis, Manolis},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {955--960},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/daskalakis19a/daskalakis19a.pdf},
  url       = {https://proceedings.mlr.press/v99/daskalakis19a.html}
}
Endnote
%0 Conference Paper
%T Computationally and Statistically Efficient Truncated Regression
%A Constantinos Daskalakis
%A Themis Gouleakis
%A Christos Tzamos
%A Manolis Zampetakis
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-daskalakis19a
%I PMLR
%P 955--960
%U https://proceedings.mlr.press/v99/daskalakis19a.html
%V 99
APA
Daskalakis, C., Gouleakis, T., Tzamos, C. & Zampetakis, M. (2019). Computationally and Statistically Efficient Truncated Regression. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:955-960. Available from https://proceedings.mlr.press/v99/daskalakis19a.html.
