Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions

Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5228-5237, 2018.

Abstract

We study the parameter tuning problem for the penalized regression model. Finding the optimal choice of the regularization parameter is a challenging problem in high-dimensional regimes where both the number of observations n and the number of parameters p are large. We propose two frameworks to obtain a computationally efficient approximation, ALO, of the leave-one-out cross-validation (LOOCV) risk for nonsmooth losses and regularizers. Our two frameworks are based on the primal and dual formulations of the penalized regression model. We prove the equivalence of the two approaches under smoothness conditions, and this equivalence enables us to justify the accuracy of both methods in that setting. We use our approaches to obtain risk estimates for several standard problems, including the generalized LASSO, nuclear norm regularization, and support vector machines. We experimentally demonstrate the effectiveness of our results in non-differentiable cases.
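
To make the idea concrete, below is a minimal sketch of the leverage-correction idea that ALO builds on, shown only for the smooth ridge-regression case, where the leave-one-out residual has the closed form (y_i - yhat_i) / (1 - H_ii) with H the hat matrix; extending this idea to nonsmooth losses and regularizers such as the generalized LASSO is the subject of the paper. The function names alo_ridge_risk and exact_loocv_risk are illustrative assumptions, not code from the paper.

    import numpy as np

    def alo_ridge_risk(X, y, lam):
        """Leverage-based LOOCV risk for ridge regression (exact in this smooth case).

        Uses the identity loo_resid_i = (y_i - yhat_i) / (1 - H_ii), where
        H = X (X'X + lam I)^{-1} X'.  ALO generalizes this correction to
        nonsmooth losses/regularizers; this ridge version is only a sketch.
        """
        n, p = X.shape
        G = X.T @ X + lam * np.eye(p)
        beta = np.linalg.solve(G, X.T @ y)                            # full-data ridge fit
        H_diag = np.einsum('ij,ji->i', X, np.linalg.solve(G, X.T))    # diagonal of hat matrix
        loo_resid = (y - X @ beta) / (1.0 - H_diag)                   # leverage-corrected residuals
        return np.mean(loo_resid ** 2)                                # estimated out-of-sample MSE

    def exact_loocv_risk(X, y, lam):
        """Brute-force LOOCV risk: refit the model n times."""
        n, p = X.shape
        errs = []
        for i in range(n):
            mask = np.arange(n) != i
            Xi, yi = X[mask], y[mask]
            beta_i = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
            errs.append((y[i] - X[i] @ beta_i) ** 2)
        return np.mean(errs)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, p = 200, 50
        X = rng.standard_normal((n, p))
        y = X @ rng.standard_normal(p) + rng.standard_normal(n)
        for lam in (0.1, 1.0, 10.0):
            print(lam, alo_ridge_risk(X, y, lam), exact_loocv_risk(X, y, lam))

In this smooth case the single-fit formula reproduces the n-refit LOOCV risk exactly, which is the speedup the paper pursues in the harder nonsmooth, high-dimensional setting.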

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-wang18m,
  title     = {Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions},
  author    = {Wang, Shuaiwen and Zhou, Wenda and Lu, Haihao and Maleki, Arian and Mirrokni, Vahab},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5228--5237},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wang18m/wang18m.pdf},
  url       = {https://proceedings.mlr.press/v80/wang18m.html}
}
Endnote
%0 Conference Paper
%T Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
%A Shuaiwen Wang
%A Wenda Zhou
%A Haihao Lu
%A Arian Maleki
%A Vahab Mirrokni
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-wang18m
%I PMLR
%P 5228--5237
%U https://proceedings.mlr.press/v80/wang18m.html
%V 80
APA
Wang, S., Zhou, W., Lu, H., Maleki, A. & Mirrokni, V. (2018). Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5228-5237. Available from https://proceedings.mlr.press/v80/wang18m.html.