$\ell_1$ Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent

David Durfee, Kevin A. Lai, Saurabh Sawlani
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1626-1656, 2018.

Abstract

We present preconditioned stochastic gradient descent (SGD) algorithms for the $\ell_1$ minimization problem $\min_{\boldsymbol{\mathit{x}}}\|\boldsymbol{\mathit{A}} \boldsymbol{\mathit{x}} - \boldsymbol{\mathit{b}}\|_1$ in the overdetermined case, where there are far more constraints than variables. Specifically, we have $\boldsymbol{\mathit{A}} \in \mathbb{R}^{n \times d}$ for $n \gg d$. Commonly known as the Least Absolute Deviations problem, $\ell_1$ regression can be used to solve many important combinatorial problems, such as minimum cut and shortest path. SGD-based algorithms are appealing for their simplicity and practical efficiency. Our primary insight is that careful preprocessing can yield preconditioned matrices $\tilde{\boldsymbol{\mathit{A}}}$ with strong properties (besides a good condition number and low dimension) that allow for faster convergence of gradient descent. In particular, we precondition using Lewis weights to obtain an isotropic matrix with fewer rows and strong upper bounds on all row norms. We leverage these conditions to find a good initialization, which we use along with recent smoothing reductions and accelerated stochastic gradient descent algorithms to achieve $\epsilon$ relative error in $\widetilde{O}(nnz(\boldsymbol{\mathit{A}}) + d^{2.5} \epsilon^{-2})$ time with high probability, where $nnz(\boldsymbol{\mathit{A}})$ is the number of non-zeros in $\boldsymbol{\mathit{A}}$. This improves over the previous best result using gradient descent for $\ell_1$ regression. We also match the best known running times for interior point methods in several settings. Finally, we show that if our original matrix $\boldsymbol{\mathit{A}}$ is approximately isotropic and the row norms are approximately equal, we can give an algorithm that avoids using fast matrix multiplication and obtains a running time of $\widetilde{O}(nnz(\boldsymbol{\mathit{A}}) + s d^{1.5}\epsilon^{-2} + d^2\epsilon^{-2})$, where $s$ is the maximum number of non-zeros in a row of $\boldsymbol{\mathit{A}}$. In this setting, we beat the best interior point methods for certain parameter regimes.
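To make the preconditioning pipeline concrete, below is a minimal NumPy sketch, not the authors' implementation. It combines the $\ell_1$ Lewis weights fixed-point iteration of Cohen and Peng (2015), on which the paper's preconditioning is based, with weighted row sampling and an $\ell_2$ warm start; the plain subgradient loop at the end is a simplified stand-in for the smoothing reduction and accelerated SGD solver actually analyzed in the paper. All function names and parameters (lewis_weights_l1, lewis_sample, solve_l1, iters, m, eta) are illustrative assumptions.

    import numpy as np

    def lewis_weights_l1(A, iters=20):
        # Fixed-point iteration of Cohen and Peng (2015) for l1 Lewis weights:
        #   w_i <- sqrt( a_i^T (A^T W^{-1} A)^{-1} a_i ),  with W = diag(w).
        n, d = A.shape
        w = np.ones(n)
        for _ in range(iters):
            M = A.T @ (A / w[:, None])                  # A^T W^{-1} A
            Minv = np.linalg.inv(M)
            tau = np.einsum('ij,jk,ik->i', A, Minv, A) # tau_i = a_i^T M^{-1} a_i
            w = np.sqrt(np.maximum(tau, 1e-15))
        return w

    def lewis_sample(A, b, m, seed=0):
        # Sample m rows with probability proportional to the l1 Lewis weights
        # (which sum to d) and rescale rows so the subsampled l1 objective is
        # an unbiased estimate of the full one.
        rng = np.random.default_rng(seed)
        w = lewis_weights_l1(A)
        p = w / w.sum()
        idx = rng.choice(A.shape[0], size=m, p=p)
        scale = 1.0 / (m * p[idx])
        return A[idx] * scale[:, None], b[idx] * scale

    def solve_l1(A, b, m=None, steps=2000, eta=0.1, seed=0):
        # Precondition by Lewis-weight row sampling, warm-start from the l2
        # solution, then run plain subgradient descent on ||Ax - b||_1.
        # (Illustrative stand-in for the paper's accelerated smoothed SGD.)
        n, d = A.shape
        m = m or min(n, 20 * d)
        As, bs = lewis_sample(A, b, m, seed)
        x = np.linalg.lstsq(As, bs, rcond=None)[0]      # l2 warm start
        best, best_val = x.copy(), np.abs(As @ x - bs).sum()
        for t in range(1, steps + 1):
            g = As.T @ np.sign(As @ x - bs)             # subgradient of l1 loss
            x -= (eta / np.sqrt(t)) * g
            val = np.abs(As @ x - bs).sum()
            if val < best_val:
                best, best_val = x.copy(), val
        return best

The sampling step reflects the known guarantee that the $\ell_1$ Lewis weights sum to $d$, so sampling $\widetilde{O}(d\epsilon^{-2})$ rows proportionally to them preserves $\|\boldsymbol{\mathit{A}}\boldsymbol{\mathit{x}} - \boldsymbol{\mathit{b}}\|_1$ for all $\boldsymbol{\mathit{x}}$ up to a $(1 \pm \epsilon)$ factor; the running-time bounds in the abstract rely on the paper's full solver, not this simplified loop.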

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-durfee18a, title = {$\ell_1$ Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent}, author = {Durfee, David and Lai, Kevin A. and Sawlani, Saurabh}, booktitle = {Proceedings of the 31st Conference On Learning Theory}, pages = {1626--1656}, year = {2018}, editor = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe}, volume = {75}, series = {Proceedings of Machine Learning Research}, month = {06--09 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v75/durfee18a/durfee18a.pdf}, url = {https://proceedings.mlr.press/v75/durfee18a.html}, abstract = {We present preconditioned stochastic gradient descent (SGD) algorithms for the $\ell_1$ minimization problem $\min_{\boldsymbol{\mathit{x}}}\|\boldsymbol{\mathit{A}} \boldsymbol{\mathit{x}} - \boldsymbol{\mathit{b}}\|_1$ in the overdetermined case, where there are far more constraints than variables. Specifically, we have $\boldsymbol{\mathit{A}} \in \mathbb{R}^{n \times d}$ for $n \gg d$. Commonly known as the Least Absolute Deviations problem, $\ell_1$ regression can be used to solve many important combinatorial problems, such as minimum cut and shortest path. SGD-based algorithms are appealing for their simplicity and practical efficiency. Our primary insight is that careful preprocessing can yield preconditioned matrices $\tilde{\boldsymbol{\mathit{A}}}$ with strong properties (besides a good condition number and low dimension) that allow for faster convergence of gradient descent. In particular, we precondition using Lewis weights to obtain an isotropic matrix with fewer rows and strong upper bounds on all row norms. We leverage these conditions to find a good initialization, which we use along with recent smoothing reductions and accelerated stochastic gradient descent algorithms to achieve $\epsilon$ relative error in $\widetilde{O}(nnz(\boldsymbol{\mathit{A}}) + d^{2.5} \epsilon^{-2})$ time with high probability, where $nnz(\boldsymbol{\mathit{A}})$ is the number of non-zeros in $\boldsymbol{\mathit{A}}$. This improves over the previous best result using gradient descent for $\ell_1$ regression. We also match the best known running times for interior point methods in several settings. Finally, we show that if our original matrix $\boldsymbol{\mathit{A}}$ is approximately isotropic and the row norms are approximately equal, we can give an algorithm that avoids using fast matrix multiplication and obtains a running time of $\widetilde{O}(nnz(\boldsymbol{\mathit{A}}) + s d^{1.5}\epsilon^{-2} + d^2\epsilon^{-2})$, where $s$ is the maximum number of non-zeros in a row of $\boldsymbol{\mathit{A}}$. In this setting, we beat the best interior point methods for certain parameter regimes.} }
Endnote
%0 Conference Paper %T $\ell_1$ Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent %A David Durfee %A Kevin A. Lai %A Saurabh Sawlani %B Proceedings of the 31st Conference On Learning Theory %C Proceedings of Machine Learning Research %D 2018 %E Sébastien Bubeck %E Vianney Perchet %E Philippe Rigollet %F pmlr-v75-durfee18a %I PMLR %P 1626--1656 %U https://proceedings.mlr.press/v75/durfee18a.html %V 75 %X We present preconditioned stochastic gradient descent (SGD) algorithms for the $\ell_1$ minimization problem $\min_{\boldsymbol{\mathit{x}}}\|\boldsymbol{\mathit{A}} \boldsymbol{\mathit{x}} - \boldsymbol{\mathit{b}}\|_1$ in the overdetermined case, where there are far more constraints than variables. Specifically, we have $\boldsymbol{\mathit{A}} \in \mathbb{R}^{n \times d}$ for $n \gg d$. Commonly known as the Least Absolute Deviations problem, $\ell_1$ regression can be used to solve many important combinatorial problems, such as minimum cut and shortest path. SGD-based algorithms are appealing for their simplicity and practical efficiency. Our primary insight is that careful preprocessing can yield preconditioned matrices $\tilde{\boldsymbol{\mathit{A}}}$ with strong properties (besides a good condition number and low dimension) that allow for faster convergence of gradient descent. In particular, we precondition using Lewis weights to obtain an isotropic matrix with fewer rows and strong upper bounds on all row norms. We leverage these conditions to find a good initialization, which we use along with recent smoothing reductions and accelerated stochastic gradient descent algorithms to achieve $\epsilon$ relative error in $\widetilde{O}(nnz(\boldsymbol{\mathit{A}}) + d^{2.5} \epsilon^{-2})$ time with high probability, where $nnz(\boldsymbol{\mathit{A}})$ is the number of non-zeros in $\boldsymbol{\mathit{A}}$. This improves over the previous best result using gradient descent for $\ell_1$ regression. We also match the best known running times for interior point methods in several settings. Finally, we show that if our original matrix $\boldsymbol{\mathit{A}}$ is approximately isotropic and the row norms are approximately equal, we can give an algorithm that avoids using fast matrix multiplication and obtains a running time of $\widetilde{O}(nnz(\boldsymbol{\mathit{A}}) + s d^{1.5}\epsilon^{-2} + d^2\epsilon^{-2})$, where $s$ is the maximum number of non-zeros in a row of $\boldsymbol{\mathit{A}}$. In this setting, we beat the best interior point methods for certain parameter regimes.
APA
Durfee, D., Lai, K.A. & Sawlani, S. (2018). $\ell_1$ Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:1626-1656. Available from https://proceedings.mlr.press/v75/durfee18a.html.
