Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables

Masaaki Takada, Taiji Suzuki, Hironori Fujisawa
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:454-463, 2018.

Abstract

Sparse regularization such as l1 regularization is a powerful and widely used strategy for high dimensional learning problems. The effectiveness of sparse regularization has been supported practically and theoretically by several studies. However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features. Ordinary l1 regularization can select variables correlated with each other, which results in deterioration of not only its generalization error but also its interpretability. In this paper, we propose a new regularization method, “Independently Interpretable Lasso” (IILasso). Our proposed regularizer suppresses the selection of correlated variables, so that each active variable affects the objective variable independently in the model. Hence, we can interpret regression coefficients intuitively and also improve performance by avoiding overfitting. We analyze the theoretical properties of IILasso and show that the proposed method is advantageous for sign recovery and achieves an almost minimax optimal convergence rate. Synthetic and real data analyses also indicate the effectiveness of IILasso.
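The abstract describes a penalty that discourages selecting correlated variables together. Below is a minimal sketch (not the authors' code) of how such a pairwise penalty can be handled by coordinate descent: each coordinate's soft-threshold is inflated by the correlations between that feature and the currently active ones. The specific objective form, the absolute-correlation similarity matrix `R`, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def iilasso_cd(X, y, lam, alpha, n_iter=200, tol=1e-8):
    """Coordinate-descent sketch for an IILasso-style objective:

        (1/2n)||y - X b||^2 + lam * ||b||_1 + (lam * alpha / 2) * |b|^T R |b|

    where R is a nonnegative similarity matrix (zero diagonal) that makes it
    costly for two correlated features to be active at the same time.
    This is an illustrative implementation, not the paper's reference code.
    """
    n, p = X.shape
    # Standardize columns so that (1/n) X_j^T X_j = 1, and center y.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = y - y.mean()
    # Assumed similarity: absolute empirical correlations, diagonal zeroed.
    R = np.abs(X.T @ X) / n
    np.fill_diagonal(R, 0.0)

    b = np.zeros(p)
    r = y.copy()  # residual y - X b
    for _ in range(n_iter):
        b_old = b.copy()
        for j in range(p):
            # Univariate least-squares solution on the partial residual.
            z = b[j] + X[:, j] @ r / n
            # Threshold grows with correlations to currently active features.
            t = lam * (1.0 + alpha * (R[j] @ np.abs(b)))
            b_new = soft_threshold(z, t)
            if b_new != b[j]:
                r -= X[:, j] * (b_new - b[j])
                b[j] = b_new
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b
```

With `alpha = 0` this reduces to ordinary Lasso coordinate descent; increasing `alpha` makes it progressively harder for two strongly correlated columns to carry nonzero coefficients simultaneously, which is the behavior the abstract attributes to IILasso.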

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-takada18a,
  title     = {Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables},
  author    = {Takada, Masaaki and Suzuki, Taiji and Fujisawa, Hironori},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {454--463},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/takada18a/takada18a.pdf},
  url       = {https://proceedings.mlr.press/v84/takada18a.html},
  abstract  = {Sparse regularization such as l1 regularization is a quite powerful and widely used strategy for high dimensional learning problems. The effectiveness of sparse regularization has been supported practically and theoretically by several studies. However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features. Ordinary l1 regularization can select variables correlated with each other, which results in deterioration of not only its generalization error but also interpretability. In this paper, we propose a new regularization method, “Independently Interpretable Lasso” (IILasso). Our proposed regularizer suppresses selecting correlated variables, and thus each active variable independently affects the objective variable in the model. Hence, we can interpret regression coefficients intuitively and also improve the performance by avoiding overfitting. We analyze theoretical property of IILasso and show that the proposed method is much advantageous for its sign recovery and achieves almost minimax optimal convergence rate. Synthetic and real data analyses also indicate the effectiveness of IILasso.}
}
Endnote
%0 Conference Paper
%T Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables
%A Masaaki Takada
%A Taiji Suzuki
%A Hironori Fujisawa
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-takada18a
%I PMLR
%P 454--463
%U https://proceedings.mlr.press/v84/takada18a.html
%V 84
%X Sparse regularization such as l1 regularization is a quite powerful and widely used strategy for high dimensional learning problems. The effectiveness of sparse regularization has been supported practically and theoretically by several studies. However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features. Ordinary l1 regularization can select variables correlated with each other, which results in deterioration of not only its generalization error but also interpretability. In this paper, we propose a new regularization method, “Independently Interpretable Lasso” (IILasso). Our proposed regularizer suppresses selecting correlated variables, and thus each active variable independently affects the objective variable in the model. Hence, we can interpret regression coefficients intuitively and also improve the performance by avoiding overfitting. We analyze theoretical property of IILasso and show that the proposed method is much advantageous for its sign recovery and achieves almost minimax optimal convergence rate. Synthetic and real data analyses also indicate the effectiveness of IILasso.
APA
Takada, M., Suzuki, T. & Fujisawa, H. (2018). Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:454-463. Available from https://proceedings.mlr.press/v84/takada18a.html.