Leverage Score Sampling for Faster Accelerated Regression and ERM
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:2247, 2020.
Abstract
Given a matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ and a vector $b\in\mathbb{R}^{n}$, we show how to compute an $\epsilon$-approximate solution to the regression problem $\min_{x\in\mathbb{R}^{d}}\frac{1}{2}\|\mathbf{A}x-b\|_{2}^{2}$ in time $\widetilde{O}\left((n+\sqrt{d\cdot\kappa_{\text{sum}}})\, s \log\epsilon^{-1}\right)$ where $\kappa_{\text{sum}}=\mathrm{tr}\left(\mathbf{A}^{\top}\mathbf{A}\right)/\lambda_{\min}(\mathbf{A}^{\top}\mathbf{A})$ and $s$ is the maximum number of nonzero entries in a row of $\mathbf{A}$. This improves upon the previous best running time of $\widetilde{O}\left((n+\sqrt{n\cdot\kappa_{\text{sum}}})\, s \log\epsilon^{-1}\right)$. We achieve our result through an interesting combination of leverage score sampling, proximal point methods, and accelerated coordinate descent methods. Further, we show that our method not only matches the performance of previous methods up to polylogarithmic factors, but improves further whenever the leverage scores of the rows are small. We also provide a non-linear generalization of these results that improves the running time for solving a broader class of ERM problems and expands the set of ERM problems provably solvable in nearly linear time.
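To make the sampling primitive in the abstract concrete: the leverage score of row $i$ of $\mathbf{A}$ is $\|U_i\|_2^2$ where $\mathbf{A}=U\Sigma V^{\top}$, and sampling rows proportionally to these scores (with appropriate rescaling) yields an unbiased sketch of the least-squares objective. The snippet below is a minimal illustrative sketch of that primitive only, not the paper's accelerated algorithm; all function names and the dense SVD-based score computation are our own simplifications.

```python
import numpy as np

def leverage_scores(A):
    # Leverage score of row i is the squared norm of row i of U,
    # where A = U S V^T is a thin SVD; the scores sum to rank(A).
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def sample_rows(A, b, m, rng):
    # Sample m rows with probability proportional to leverage scores,
    # rescaling each sampled row by 1/sqrt(m * p_i) so that the
    # sketched objective is an unbiased estimate of the full one.
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    scale = 1.0 / np.sqrt(m * p[idx])
    return A[idx] * scale[:, None], b[idx] * scale

rng = np.random.default_rng(0)
n, d = 2000, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Full solve vs. solve on a leverage-score sketch of 200 rows.
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
As, bs = sample_rows(A, b, m=200, rng=rng)
x_sketch = np.linalg.lstsq(As, bs, rcond=None)[0]
print(np.linalg.norm(x_sketch - x_full))  # small: sketch preserves the solution
```

In the regime the abstract targets, computing exact scores via an SVD would defeat the purpose; fast algorithms approximate them, and the paper's contribution is combining such sampling with proximal point and accelerated coordinate descent methods.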