Localized Lasso for High-Dimensional Regression

Makoto Yamada, Koh Takeuchi, Tomoharu Iwata, John Shawe-Taylor, Samuel Kaski
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:325-333, 2017.

Abstract

We introduce the localized Lasso, which learns models that are both interpretable and have high predictive power in problems with high dimensionality d and small sample size n. More specifically, we consider a function defined by local sparse models, one at each data point. We introduce sample-wise network regularization to borrow strength across the models, and sample-wise exclusive group sparsity (a.k.a. the ℓ1,2 norm) to introduce diversity into the choice of feature sets in the local models. The local models are interpretable in terms of the similarity of their sparsity patterns. The cost function is convex, and thus has a globally optimal solution. Moreover, we propose a simple yet efficient iterative least-squares based optimization procedure for the localized Lasso, which needs no tuning parameter and is guaranteed to converge to a globally optimal solution. The solution is empirically shown to outperform alternatives on both simulated and genomic personalized/precision medicine data.
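To make the formulation concrete, below is a minimal NumPy sketch of an objective with the shape described in the abstract: a per-sample least-squares loss, a network term that couples the sample-wise weight vectors, and a sample-wise exclusive (ℓ1,2) sparsity term. The function name, the symbols W, R, lam_net, and lam_exc, and the exact form of each term are illustrative assumptions, not the paper's definitive formulation.

import numpy as np

def localized_lasso_objective(W, X, y, R, lam_net, lam_exc):
    """Illustrative objective in the spirit of the abstract (an assumption,
    not the paper's exact formulation).

    W : (n, d) array, one local weight vector per sample
    X : (n, d) design matrix
    y : (n,) targets
    R : (n, n) nonnegative sample-similarity graph
    """
    # Per-sample squared loss: each sample i is fit by its own local model w_i.
    loss = np.sum((y - np.einsum("ij,ij->i", X, W)) ** 2)
    # Network regularization: samples linked in R are pushed toward similar weights.
    diffs = W[:, None, :] - W[None, :, :]        # (n, n, d) pairwise differences
    net = np.sum(R * np.linalg.norm(diffs, axis=2))
    # Exclusive (l_{1,2}) sparsity: squared l1 norm per sample, which keeps each
    # local model sparse while letting different samples select different features.
    exc = np.sum(np.sum(np.abs(W), axis=1) ** 2)
    return loss + lam_net * net + lam_exc * exc

# Toy usage with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
n, d = 20, 50
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
R = (rng.random((n, n)) < 0.1).astype(float)
W = np.zeros((n, d))
print(localized_lasso_objective(W, X, y, R, lam_net=1.0, lam_exc=1.0))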

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-yamada17a,
  title     = {{Localized Lasso for High-Dimensional Regression}},
  author    = {Yamada, Makoto and Takeuchi, Koh and Iwata, Tomoharu and Shawe-Taylor, John and Kaski, Samuel},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {325--333},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/yamada17a/yamada17a.pdf},
  url       = {https://proceedings.mlr.press/v54/yamada17a.html}
}
Endnote
%0 Conference Paper
%T Localized Lasso for High-Dimensional Regression
%A Makoto Yamada
%A Koh Takeuchi
%A Tomoharu Iwata
%A John Shawe-Taylor
%A Samuel Kaski
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-yamada17a
%I PMLR
%P 325--333
%U https://proceedings.mlr.press/v54/yamada17a.html
%V 54
APA
Yamada, M., Takeuchi, K., Iwata, T., Shawe-Taylor, J. & Kaski, S. (2017). Localized Lasso for High-Dimensional Regression. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:325-333. Available from https://proceedings.mlr.press/v54/yamada17a.html.