A Regularization Approach to Nonlinear Variable Selection

Lorenzo Rosasco, Matteo Santoro, Sofia Mosci, Alessandro Verri, Silvia Villa
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:653-660, 2010.

Abstract

In this paper we consider a regularization approach to variable selection when the regression function depends nonlinearly on a few input variables. The proposed method is based on a regularized least-squares estimator penalizing large values of the partial derivatives. An efficient iterative procedure is proposed to solve the underlying variational problem, and its convergence is proved. The empirical properties of the obtained estimator are tested both for prediction and for variable selection. The algorithm compares favorably to more standard ridge regression and L1 regularization schemes.
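The full paper is not reproduced on this page, but the idea summarized in the abstract, least-squares fitting with a penalty on the size of the partial derivatives so that variables the function does not actually depend on are driven out of the model, can be illustrated with a small toy sketch. The code below is an assumption-laden illustration, not the authors' estimator or their iterative procedure: it restricts the fitted function to per-variable quadratic features, smooths the derivative penalty, and minimizes by plain gradient descent. The function fit_derivative_penalized and all parameter names are invented for this example.

# Hypothetical sketch (not the paper's algorithm): regularized least squares with a
# penalty on the empirical norms of the partial derivatives of the fitted function.
# f is restricted to per-variable quadratic features, so the partial derivative with
# respect to variable j depends only on that variable's coefficient block, and the
# (smoothed) penalty can be minimized by plain gradient descent.
import numpy as np

def fit_derivative_penalized(X, y, tau=0.1, eps=1e-6, lr=0.01, n_iter=5000):
    n, d = X.shape
    # Per-variable feature blocks [x_j, x_j^2]; Phi has 2*d columns.
    Phi = np.hstack([np.stack([X[:, j], X[:, j] ** 2], axis=1) for j in range(d)])
    # D[j] holds the derivatives of variable j's features at the training points:
    # d/dx_j [x_j, x_j^2] = [1, 2*x_j].
    D = [np.stack([np.ones(n), 2.0 * X[:, j]], axis=1) for j in range(d)]
    G = [D[j].T @ D[j] / n for j in range(d)]  # Gram matrices of the derivative maps
    w = np.zeros(2 * d)
    for _ in range(n_iter):
        resid = Phi @ w - y
        grad = 2.0 / n * Phi.T @ resid          # gradient of the data-fit term
        for j in range(d):
            wj = w[2 * j: 2 * j + 2]
            # smoothed empirical norm of the j-th partial derivative
            norm_j = np.sqrt(wj @ G[j] @ wj + eps)
            grad[2 * j: 2 * j + 2] += tau * (G[j] @ wj) / norm_j
        w -= lr * grad
    return w

# Toy usage: y depends (nonlinearly) only on the first two of ten variables.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 10))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
w = fit_derivative_penalized(X, y)
# Variables whose partial derivative has non-negligible empirical norm are "selected".
deriv_norms = [np.linalg.norm(np.stack([np.ones(200), 2.0 * X[:, j]], axis=1)
                              @ w[2 * j: 2 * j + 2]) / np.sqrt(200) for j in range(10)]
print(np.round(deriv_norms, 3))

In this toy setting the derivative-norm penalty plays the role of a group penalty over variables, which is what allows irrelevant inputs to be suppressed even though the dependence on the relevant ones is nonlinear.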

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-rosasco10a,
  title     = {A Regularization Approach to Nonlinear Variable Selection},
  author    = {Rosasco, Lorenzo and Santoro, Matteo and Mosci, Sofia and Verri, Alessandro and Villa, Silvia},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {653--660},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/rosasco10a/rosasco10a.pdf},
  url       = {https://proceedings.mlr.press/v9/rosasco10a.html},
  abstract  = {In this paper we consider a regularization approach to variable selection when the regression function depends nonlinearly on a few input variables. The proposed method is based on a regularized least square estimator penalizing large values of the partial derivatives. An efficient iterative procedure is proposed to solve the underlying variational problem, and its convergence is proved. The empirical properties of the obtained estimator are tested both for prediction and variable selection. The algorithm compares favorably to more standard ridge regression and L1 regularization schemes.}
}
Endnote
%0 Conference Paper
%T A Regularization Approach to Nonlinear Variable Selection
%A Lorenzo Rosasco
%A Matteo Santoro
%A Sofia Mosci
%A Alessandro Verri
%A Silvia Villa
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-rosasco10a
%I PMLR
%P 653--660
%U https://proceedings.mlr.press/v9/rosasco10a.html
%V 9
%X In this paper we consider a regularization approach to variable selection when the regression function depends nonlinearly on a few input variables. The proposed method is based on a regularized least square estimator penalizing large values of the partial derivatives. An efficient iterative procedure is proposed to solve the underlying variational problem, and its convergence is proved. The empirical properties of the obtained estimator are tested both for prediction and variable selection. The algorithm compares favorably to more standard ridge regression and L1 regularization schemes.
RIS
TY - CPAPER
TI - A Regularization Approach to Nonlinear Variable Selection
AU - Lorenzo Rosasco
AU - Matteo Santoro
AU - Sofia Mosci
AU - Alessandro Verri
AU - Silvia Villa
BT - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA - 2010/03/31
ED - Yee Whye Teh
ED - Mike Titterington
ID - pmlr-v9-rosasco10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 9
SP - 653
EP - 660
L1 - http://proceedings.mlr.press/v9/rosasco10a/rosasco10a.pdf
UR - https://proceedings.mlr.press/v9/rosasco10a.html
AB - In this paper we consider a regularization approach to variable selection when the regression function depends nonlinearly on a few input variables. The proposed method is based on a regularized least square estimator penalizing large values of the partial derivatives. An efficient iterative procedure is proposed to solve the underlying variational problem, and its convergence is proved. The empirical properties of the obtained estimator are tested both for prediction and variable selection. The algorithm compares favorably to more standard ridge regression and L1 regularization schemes.
ER -
APA
Rosasco, L., Santoro, M., Mosci, S., Verri, A. & Villa, S. (2010). A Regularization Approach to Nonlinear Variable Selection. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:653-660. Available from https://proceedings.mlr.press/v9/rosasco10a.html.