Simultaneous Safe Screening of Features and Samples in Doubly Sparse Modeling

Atsushi Shibagaki, Masayuki Karasuyama, Kohei Hatano, Ichiro Takeuchi
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1577-1586, 2016.

Abstract

The problem of learning a sparse model is conceptually interpreted as the process of identifying active features/samples and then optimizing the model over them. Recently introduced safe screening allows us to identify a subset of the non-active features/samples. So far, safe screening has been studied separately for feature screening and for sample screening. In this paper, we introduce a new approach for safely screening features and samples simultaneously by alternately iterating feature and sample screening steps. A significant advantage of considering them simultaneously rather than individually is that they have a synergy effect, in the sense that the results of the previous safe feature screening can be exploited to improve the performance of the next safe sample screening, and vice versa. We first theoretically investigate the synergy effect, and then illustrate the practical advantage through extensive numerical experiments on problems with large numbers of features and samples.
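For context on the single-view building block the abstract refers to, the following is a minimal NumPy sketch of a gap-based safe feature-screening rule for the Lasso. This is the standard "gap safe" construction, not the paper's doubly sparse algorithm (which additionally screens samples and alternates the two steps); function names such as `ista_lasso` and `gap_safe_screen` are illustrative, not from the paper.

```python
import numpy as np

def ista_lasso(X, y, lam, w, n_iter):
    """Proximal gradient (ISTA) for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    L = np.linalg.norm(X, ord=2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

def gap_safe_screen(X, y, w, lam):
    """Boolean mask of features guaranteed to be zero at the Lasso optimum."""
    res = y - X @ w
    # Rescale the residual into a dual-feasible point (|x_j' theta| <= 1 for all j).
    theta = res / max(lam, np.max(np.abs(X.T @ res)))
    primal = 0.5 * res @ res + lam * np.abs(w).sum()
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)      # weak duality guarantees gap >= 0
    radius = np.sqrt(2.0 * gap) / lam  # radius of the "gap safe" sphere
    # Feature j is safely inactive if its score stays strictly below 1 over
    # the whole sphere of dual candidates centered at theta.
    return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0
```

The tighter the duality gap, the smaller the sphere and the more features can be discarded. The paper's "synergy" is the analogous observation in the doubly sparse setting: a sample-screening step shrinks the region used by the next feature-screening step, and vice versa.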

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-shibagaki16,
  title     = {Simultaneous Safe Screening of Features and Samples in Doubly Sparse Modeling},
  author    = {Shibagaki, Atsushi and Karasuyama, Masayuki and Hatano, Kohei and Takeuchi, Ichiro},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1577--1586},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/shibagaki16.pdf},
  url       = {https://proceedings.mlr.press/v48/shibagaki16.html},
  abstract  = {The problem of learning a sparse model is conceptually interpreted as the process of identifying active features/samples and then optimizing the model over them. Recently introduced safe screening allows us to identify a subset of the non-active features/samples. So far, safe screening has been studied separately for feature screening and for sample screening. In this paper, we introduce a new approach for safely screening features and samples simultaneously by alternately iterating feature and sample screening steps. A significant advantage of considering them simultaneously rather than individually is that they have a synergy effect, in the sense that the results of the previous safe feature screening can be exploited to improve the performance of the next safe sample screening, and vice versa. We first theoretically investigate the synergy effect, and then illustrate the practical advantage through extensive numerical experiments on problems with large numbers of features and samples.}
}
EndNote
%0 Conference Paper
%T Simultaneous Safe Screening of Features and Samples in Doubly Sparse Modeling
%A Atsushi Shibagaki
%A Masayuki Karasuyama
%A Kohei Hatano
%A Ichiro Takeuchi
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-shibagaki16
%I PMLR
%P 1577--1586
%U https://proceedings.mlr.press/v48/shibagaki16.html
%V 48
%X The problem of learning a sparse model is conceptually interpreted as the process of identifying active features/samples and then optimizing the model over them. Recently introduced safe screening allows us to identify a subset of the non-active features/samples. So far, safe screening has been studied separately for feature screening and for sample screening. In this paper, we introduce a new approach for safely screening features and samples simultaneously by alternately iterating feature and sample screening steps. A significant advantage of considering them simultaneously rather than individually is that they have a synergy effect, in the sense that the results of the previous safe feature screening can be exploited to improve the performance of the next safe sample screening, and vice versa. We first theoretically investigate the synergy effect, and then illustrate the practical advantage through extensive numerical experiments on problems with large numbers of features and samples.
RIS
TY - CPAPER
TI - Simultaneous Safe Screening of Features and Samples in Doubly Sparse Modeling
AU - Atsushi Shibagaki
AU - Masayuki Karasuyama
AU - Kohei Hatano
AU - Ichiro Takeuchi
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-shibagaki16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 1577
EP - 1586
L1 - http://proceedings.mlr.press/v48/shibagaki16.pdf
UR - https://proceedings.mlr.press/v48/shibagaki16.html
AB - The problem of learning a sparse model is conceptually interpreted as the process of identifying active features/samples and then optimizing the model over them. Recently introduced safe screening allows us to identify a subset of the non-active features/samples. So far, safe screening has been studied separately for feature screening and for sample screening. In this paper, we introduce a new approach for safely screening features and samples simultaneously by alternately iterating feature and sample screening steps. A significant advantage of considering them simultaneously rather than individually is that they have a synergy effect, in the sense that the results of the previous safe feature screening can be exploited to improve the performance of the next safe sample screening, and vice versa. We first theoretically investigate the synergy effect, and then illustrate the practical advantage through extensive numerical experiments on problems with large numbers of features and samples.
ER -
APA
Shibagaki, A., Karasuyama, M., Hatano, K. & Takeuchi, I. (2016). Simultaneous Safe Screening of Features and Samples in Doubly Sparse Modeling. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1577-1586. Available from https://proceedings.mlr.press/v48/shibagaki16.html.
