Data sparse nonparametric regression with $ε$-insensitive losses

Maxime Sangnier, Olivier Fercoq, Florence d’Alché-Buc
Proceedings of the Ninth Asian Conference on Machine Learning, PMLR 77:192-207, 2017.

Abstract

Leveraging the celebrated support vector regression (SVR) method, we propose a unifying framework to deliver regression machines in reproducing kernel Hilbert spaces (RKHSs) with data sparsity. The central point is a new definition of $ε$-insensitivity, valid for many regression losses (including quantile and expectile regression) and their multivariate extensions. We show that the dual optimization problem of empirical risk minimization with $ε$-insensitivity involves a data sparse regularization. We also provide an analysis of the excess risk as well as a randomized coordinate descent algorithm for solving the dual. Numerical experiments validate our approach.
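For context, the classical (scalar) $ε$-insensitive loss behind SVR is $\ell_\varepsilon(y, f(x)) = \max(0, |y - f(x)| - \varepsilon)$: residuals smaller than $ε$ incur no penalty. The paper's contribution is a generalized notion of $ε$-insensitivity that also covers quantile and expectile losses and their multivariate extensions, which is not reproduced here.

The data sparsity mentioned in the abstract is easiest to see in the classical case: the dual of offset-free kernel SVR is the box-constrained problem $\min_{\alpha \in [-C, C]^n} \frac{1}{2}\alpha^\top K \alpha - y^\top \alpha + \varepsilon \|\alpha\|_1$, whose $\ell_1$ term drives dual coefficients, and hence training points, to zero. The sketch below runs randomized exact coordinate descent on this classical dual; it is a minimal illustration under these assumptions, not the paper's algorithm, and the function name and default parameters are invented for the example.

import numpy as np

def rcd_svr_dual(K, y, C=1.0, eps=0.1, n_iter=5000, seed=0):
    # Randomized exact coordinate descent on the offset-free SVR dual:
    #   min_{alpha in [-C, C]^n}  0.5 * a'Ka - y'a + eps * ||a||_1
    # The eps * ||alpha||_1 term is the data sparse regularizer: every
    # coordinate driven to zero removes a training point from the model.
    # Assumes K is a symmetric positive-definite Gram matrix (K[i, i] > 0).
    n = len(y)
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    Ka = np.zeros(n)  # running value of K @ alpha
    for _ in range(n_iter):
        i = rng.integers(n)
        # Linear coefficient of alpha_i in the objective (other coords fixed)
        g = Ka[i] - K[i, i] * alpha[i] - y[i]
        # Exact 1-d minimizer: soft-threshold, then clip to the box
        new = -np.sign(g) * max(abs(g) - eps, 0.0) / K[i, i]
        new = min(max(new, -C), C)
        Ka += (new - alpha[i]) * K[:, i]
        alpha[i] = new
    return alpha  # predictions: f(x) = sum_i alpha[i] * k(x_i, x)

A coordinate at zero corresponds to a training point the fitted model no longer needs, so the fraction of zeros in the returned alpha directly measures the data sparsity induced by $ε$.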

Cite this Paper

BibTeX
@InProceedings{pmlr-v77-sangnier17a,
  title     = {Data sparse nonparametric regression with $ε$-insensitive losses},
  author    = {Sangnier, Maxime and Fercoq, Olivier and d’Alché-Buc, Florence},
  booktitle = {Proceedings of the Ninth Asian Conference on Machine Learning},
  pages     = {192--207},
  year      = {2017},
  editor    = {Zhang, Min-Ling and Noh, Yung-Kyun},
  volume    = {77},
  series    = {Proceedings of Machine Learning Research},
  address   = {Yonsei University, Seoul, Republic of Korea},
  month     = {15--17 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v77/sangnier17a/sangnier17a.pdf},
  url       = {https://proceedings.mlr.press/v77/sangnier17a.html},
  abstract  = {Leveraging the celebrated support vector regression (SVR) method, we propose a unifying framework in order to deliver regression machines in reproducing kernel Hilbert spaces (RKHSs) with data sparsity. The central point is a new definition of $ε$-insensitivity, valid for many regression losses (including quantile and expectile regression) and their multivariate extensions. We show that the dual optimization problem to empirical risk minimization with $ε$-insensitivity involves a data sparse regularization. We also provide an analysis of the excess of risk as well as a randomized coordinate descent algorithm for solving the dual. Numerical experiments validate our approach.}
}
Endnote
%0 Conference Paper
%T Data sparse nonparametric regression with $ε$-insensitive losses
%A Maxime Sangnier
%A Olivier Fercoq
%A Florence d’Alché-Buc
%B Proceedings of the Ninth Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Min-Ling Zhang
%E Yung-Kyun Noh
%F pmlr-v77-sangnier17a
%I PMLR
%P 192--207
%U https://proceedings.mlr.press/v77/sangnier17a.html
%V 77
%X Leveraging the celebrated support vector regression (SVR) method, we propose a unifying framework in order to deliver regression machines in reproducing kernel Hilbert spaces (RKHSs) with data sparsity. The central point is a new definition of $ε$-insensitivity, valid for many regression losses (including quantile and expectile regression) and their multivariate extensions. We show that the dual optimization problem to empirical risk minimization with $ε$-insensitivity involves a data sparse regularization. We also provide an analysis of the excess of risk as well as a randomized coordinate descent algorithm for solving the dual. Numerical experiments validate our approach.
APA
Sangnier, M., Fercoq, O. & d’Alché-Buc, F. (2017). Data sparse nonparametric regression with $ε$-insensitive losses. Proceedings of the Ninth Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 77:192-207. Available from https://proceedings.mlr.press/v77/sangnier17a.html.
