Data sparse nonparametric regression with $ε$-insensitive losses


Maxime Sangnier, Olivier Fercoq, Florence d’Alché-Buc;
Proceedings of the Ninth Asian Conference on Machine Learning, PMLR 77:192-207, 2017.


Leveraging the celebrated support vector regression (SVR) method, we propose a unifying framework to deliver regression machines in reproducing kernel Hilbert spaces (RKHSs) with data sparsity. The central point is a new definition of $ε$-insensitivity, valid for many regression losses (including quantile and expectile regression) and their multivariate extensions. We show that the dual optimization problem to empirical risk minimization with $ε$-insensitivity involves a data-sparse regularization. We also provide an analysis of the excess risk as well as a randomized coordinate descent algorithm for solving the dual. Numerical experiments validate our approach.
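As a point of reference for the losses mentioned in the abstract, the following is a minimal numpy sketch of the classical SVR $ε$-insensitive loss and the pinball (quantile) loss. The function names are illustrative, not from the paper, and the paper's generalized notion of $ε$-insensitivity for quantile and expectile losses is not reproduced here; the sketch only shows the familiar special case in which residuals inside the $ε$-tube incur zero loss, which is the source of data sparsity.

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    # Classical SVR epsilon-insensitive loss: zero whenever |r| <= eps,
    # so training points inside the tube do not contribute (data sparsity).
    return np.maximum(np.abs(residual) - eps, 0.0)

def pinball(residual, tau=0.5):
    # Pinball (quantile) loss with quantile level tau; the paper's framework
    # equips such losses with an analogous epsilon-insensitive region.
    return np.maximum(tau * residual, (tau - 1.0) * residual)

# Demo: residuals y - f(x) on a small grid.
r = np.linspace(-1.0, 1.0, 5)
eps_insensitive(r, eps=0.5)  # zeros for the residuals inside the tube
pinball(r, tau=0.9)          # asymmetric penalty around zero
```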
