Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach

Mao Ye, Yan Sun
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5620-5629, 2018.

Abstract

We propose a variable selection method for high-dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables. The proposed method approximates this complex system using a penalized neural network and selects explanatory variables by measuring their utility in explaining the variance of the response variable. This measurement is based on a novel statistic called Drop-Out-One Loss. The proposed method also allows (overlapping) group variable selection. We prove that the proposed method can select relevant variables and exclude irrelevant variables with probability one as the sample size goes to infinity, which is referred to as the Oracle Property. Experimental results on simulated and real-world datasets show the efficiency of our method in terms of variable selection and prediction accuracy.
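The core idea can be sketched numerically: fit a predictive model, then for each variable measure how much the loss increases when that variable is "dropped out" of the inputs. The following is a minimal illustrative sketch, not the paper's algorithm: it substitutes ordinary least squares for the penalized neural network, uses mean-imputation as the drop operation, and the threshold `0.05` is an arbitrary choice for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 6
X = rng.normal(size=(n, p))
# Only variables 0 and 1 are relevant; the remaining four are noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# Stand-in predictor: ordinary least squares (the paper fits a
# penalized neural network; any fitted model illustrates the statistic).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda Z: Z @ beta

full_loss = np.mean((y - predict(X)) ** 2)

# Drop-out-one style score: neutralize variable j (here, replace it
# with its sample mean) and record the resulting increase in loss.
scores = []
for j in range(p):
    Xj = X.copy()
    Xj[:, j] = X[:, j].mean()
    scores.append(np.mean((y - predict(J := Xj)) ** 2) - full_loss)

# Select variables whose removal clearly hurts prediction.
selected = [j for j, s in enumerate(scores) if s > 0.05]
print(selected)  # → [0, 1]
```

Irrelevant variables barely change the loss when dropped, so their scores stay near zero, while dropping a relevant variable inflates the loss by roughly its coefficient squared times its variance.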

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-ye18b,
  title     = {Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach},
  author    = {Ye, Mao and Sun, Yan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5620--5629},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/ye18b/ye18b.pdf},
  url       = {http://proceedings.mlr.press/v80/ye18b.html},
  abstract  = {We propose a variable selection method for high dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables. The proposed method approximates this complex system using a penalized neural network and selects explanatory variables by measuring their utility in explaining the variance of the response variable. This measurement is based on a novel statistic called Drop-Out-One Loss. The proposed method also allows (overlapping) group variable selection. We prove that the proposed method can select relevant variables and exclude irrelevant variables with probability one as the sample size goes to infinity, which is referred to as the Oracle Property. Experimental results on simulated and real world datasets show the efficiency of our method in terms of variable selection and prediction accuracy.}
}
Endnote
%0 Conference Paper
%T Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach
%A Mao Ye
%A Yan Sun
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-ye18b
%I PMLR
%P 5620--5629
%U http://proceedings.mlr.press/v80/ye18b.html
%V 80
%X We propose a variable selection method for high dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables. The proposed method approximates this complex system using a penalized neural network and selects explanatory variables by measuring their utility in explaining the variance of the response variable. This measurement is based on a novel statistic called Drop-Out-One Loss. The proposed method also allows (overlapping) group variable selection. We prove that the proposed method can select relevant variables and exclude irrelevant variables with probability one as the sample size goes to infinity, which is referred to as the Oracle Property. Experimental results on simulated and real world datasets show the efficiency of our method in terms of variable selection and prediction accuracy.
APA
Ye, M. & Sun, Y. (2018). Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5620-5629. Available from http://proceedings.mlr.press/v80/ye18b.html.

Related Material