Scalable Greedy Feature Selection via Weak Submodularity

Rajiv Khanna, Ethan Elenberg, Alex Dimakis, Sahand Negahban, Joydeep Ghosh
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1560-1568, 2017.

Abstract

Greedy algorithms are widely used for problems in machine learning such as feature selection and set function optimization. Unfortunately, for large datasets, the running time of even greedy algorithms can be quite high, because each greedy step requires refitting a model or evaluating a function on the previously selected choices together with the new candidate. Two faster approximations to greedy forward selection were introduced recently [Mirzasoleiman et al., 2013, 2015]. They achieve better performance by exploiting distributed computation and stochastic evaluation, respectively. Both algorithms have provable performance guarantees for submodular functions. In this paper we show that, contrary to previously held opinion, submodularity is not required to obtain approximation guarantees for these two algorithms. Specifically, we show that a generalized concept of weak submodularity suffices to give multiplicative approximation guarantees. Our result extends the applicability of these algorithms to a larger class of functions. Furthermore, we show that a bounded submodularity ratio can be used to provide data-dependent bounds that can sometimes be tighter even for submodular functions. We empirically validate our work by showing superior performance of fast greedy approximations versus several established baselines on artificial and real datasets.
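
For context, the submodularity ratio that quantifies weak submodularity is due to Das and Kempe (2011), whose framework this paper builds on. A standard statement of the definition, in a notation assumed here for illustration:

```latex
% Submodularity ratio of a monotone set function f (Das & Kempe, 2011):
\gamma_{U,k} \;=\;
\min_{\substack{L \subseteq U,\; S:\, |S| \le k,\; S \cap L = \emptyset}}
\frac{\sum_{x \in S} \bigl( f(L \cup \{x\}) - f(L) \bigr)}{f(L \cup S) - f(L)}
% f is submodular iff gamma_{U,k} >= 1 for all (U, k); it is weakly
% submodular when gamma_{U,k} > 0, in which case greedy forward selection
% retains a multiplicative (1 - e^{-gamma}) style approximation guarantee.
```

The ratio compares the total gain of adding elements of $S$ one at a time against the gain of adding $S$ all at once; submodularity forces it to be at least 1, while weak submodularity only asks that it stay bounded away from 0.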

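As a rough illustration of the stochastic-evaluation idea attributed above to Mirzasoleiman et al. (2015): at each of the $k$ greedy steps, marginal gains are computed only on a random sample of about $(n/k)\log(1/\varepsilon)$ candidates rather than on all remaining elements. The sketch below is not the authors' code; the function names, signature, and `eps` default are illustrative.

```python
import math
import random

def stochastic_greedy(f, ground_set, k, eps=0.1, seed=0):
    """Minimal sketch of stochastic greedy selection, in the spirit of
    Mirzasoleiman et al. (2015). `f` is a monotone set function evaluated
    on a list of items; names and defaults are illustrative."""
    rng = random.Random(seed)
    n = len(ground_set)
    # Per-step sample size (n/k) * log(1/eps): for submodular f this is
    # enough, in expectation, for a (1 - 1/e - eps) guarantee.
    sample_size = min(n, max(1, math.ceil((n / k) * math.log(1.0 / eps))))
    selected, remaining = [], list(ground_set)
    for _ in range(min(k, n)):
        candidates = rng.sample(remaining, min(sample_size, len(remaining)))
        base = f(selected)
        # Evaluate marginal gains only on the sampled candidates.
        best = max(candidates, key=lambda x: f(selected + [x]) - base)
        selected.append(best)
        remaining.remove(best)
    return selected
```

For feature selection, `f(S)` could be, say, the $R^2$ of a model refit on the feature subset `S`; the point of the sampling is that each step costs $O((n/k)\log(1/\varepsilon))$ function evaluations instead of the $O(n)$ required by plain greedy forward selection.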
Cite this Paper


BibTeX
@InProceedings{pmlr-v54-khanna17b,
  title     = {{Scalable Greedy Feature Selection via Weak Submodularity}},
  author    = {Khanna, Rajiv and Elenberg, Ethan and Dimakis, Alex and Negahban, Sahand and Ghosh, Joydeep},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1560--1568},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/khanna17b/khanna17b.pdf},
  url       = {https://proceedings.mlr.press/v54/khanna17b.html}
}
Endnote
%0 Conference Paper
%T Scalable Greedy Feature Selection via Weak Submodularity
%A Rajiv Khanna
%A Ethan Elenberg
%A Alex Dimakis
%A Sahand Negahban
%A Joydeep Ghosh
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-khanna17b
%I PMLR
%P 1560--1568
%U https://proceedings.mlr.press/v54/khanna17b.html
%V 54
APA
Khanna, R., Elenberg, E., Dimakis, A., Negahban, S. & Ghosh, J. (2017). Scalable Greedy Feature Selection via Weak Submodularity. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:1560-1568. Available from https://proceedings.mlr.press/v54/khanna17b.html.