Approximation Schemes for ReLU Regression

Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:1452-1485, 2020.

Abstract

We consider the fundamental problem of ReLU regression, where the goal is to output the best-fitting ReLU with respect to square loss given access to draws from some unknown distribution. We give the first efficient, constant-factor approximation algorithm for this problem, assuming the underlying distribution satisfies some weak concentration and anti-concentration conditions (this class includes, for example, all log-concave distributions). This solves the main open problem of Goel et al., who proved hardness results for any exact algorithm for ReLU regression (up to an additive $\epsilon$). Using more sophisticated techniques, we can improve our results and obtain a polynomial-time approximation scheme for any subgaussian distribution. Given the aforementioned hardness results, these guarantees cannot be substantially improved. Our main insight is a new characterization of surrogate losses for nonconvex activations. While prior work had established the existence of convex surrogates for monotone activations, we show that properties of the underlying distribution actually induce strong convexity for the loss, allowing us to relate the global minimum to the activation's Chow parameters.
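To make the setting concrete, here is a minimal illustrative sketch of ReLU regression via a surrogate-gradient update (a GLMtron-style iteration, not the paper's algorithm): instead of following the gradient of the nonconvex square loss, each step moves against $\mathbb{E}[(\mathrm{ReLU}(w \cdot x) - y)\,x]$, which descends a convex surrogate for monotone activations. All names and parameters below are illustrative assumptions.

```python
# Hedged sketch, NOT the paper's algorithm: a GLMtron-style
# surrogate-gradient update for ReLU regression on Gaussian data.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fit_relu(X, y, eta=0.1, steps=500):
    """Fit x -> ReLU(w . x) by repeated surrogate-gradient steps:
       w <- w - eta * mean[(ReLU(w . x) - y) x]."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        residual = relu(X @ w) - y           # per-sample prediction error
        w -= eta * (X.T @ residual) / n      # surrogate-gradient step
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))           # (sub)gaussian covariates
w_star = rng.standard_normal(5)              # hidden target weights
y = relu(X @ w_star)                         # realizable, noiseless labels

w_hat = fit_relu(X, y)
mse = np.mean((relu(X @ w_hat) - y) ** 2)    # square loss of the fit
print(mse)
```

In this realizable, well-conditioned toy setting the square loss drops to near zero; the paper's contribution is handling the agnostic setting (arbitrary labels), where such simple iterations can fail and constant-factor guarantees require the distributional assumptions described above.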

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-diakonikolas20b,
  title     = {Approximation Schemes for {ReLU} Regression},
  author    = {Diakonikolas, Ilias and Goel, Surbhi and Karmalkar, Sushrut and Klivans, Adam R. and Soltanolkotabi, Mahdi},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {1452--1485},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/diakonikolas20b/diakonikolas20b.pdf},
  url       = {http://proceedings.mlr.press/v125/diakonikolas20b.html},
  abstract  = {We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution. We give the first efficient, constant-factor approximation algorithm for this problem assuming the underlying distribution satisfies some weak concentration and anti-concentration conditions (and includes, for example, all log-concave distributions). This solves the main open problem of Goel et al., who proved hardness results for any exact algorithm for ReLU regression (up to an additive $\epsilon$). Using more sophisticated techniques, we can improve our results and obtain a polynomial-time approximation scheme for any subgaussian distribution. Given the aforementioned hardness results, these guarantees cannot be substantially improved. Our main insight is a new characterization of {\em surrogate losses} for nonconvex activations. While prior work had established the existence of convex surrogates for monotone activations, we show that properties of the underlying distribution actually induce strong convexity for the loss, allowing us to relate the global minimum to the activation's {\em Chow parameters}.}
}
Endnote
%0 Conference Paper
%T Approximation Schemes for ReLU Regression
%A Ilias Diakonikolas
%A Surbhi Goel
%A Sushrut Karmalkar
%A Adam R. Klivans
%A Mahdi Soltanolkotabi
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-diakonikolas20b
%I PMLR
%J Proceedings of Machine Learning Research
%P 1452--1485
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution. We give the first efficient, constant-factor approximation algorithm for this problem assuming the underlying distribution satisfies some weak concentration and anti-concentration conditions (and includes, for example, all log-concave distributions). This solves the main open problem of Goel et al., who proved hardness results for any exact algorithm for ReLU regression (up to an additive epsilon). Using more sophisticated techniques, we can improve our results and obtain a polynomial-time approximation scheme for any subgaussian distribution. Given the aforementioned hardness results, these guarantees cannot be substantially improved. Our main insight is a new characterization of surrogate losses for nonconvex activations. While prior work had established the existence of convex surrogates for monotone activations, we show that properties of the underlying distribution actually induce strong convexity for the loss, allowing us to relate the global minimum to the activation's Chow parameters.
APA
Diakonikolas, I., Goel, S., Karmalkar, S., Klivans, A. R., & Soltanolkotabi, M. (2020). Approximation Schemes for ReLU Regression. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:1452-1485.