Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization

Moran Feldman, Christopher Harshaw, Amin Karbasi
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:758-784, 2017.

Abstract

It is known that greedy methods perform well for maximizing \textit{monotone} submodular functions. At the same time, such methods perform poorly in the face of non-monotonicity. In this paper, we show, perhaps surprisingly, that invoking the classical greedy algorithm $O(\sqrt{k})$ times leads to the (currently) fastest deterministic algorithm, called RepeatedGreedy, for maximizing a general submodular function subject to $k$-independent system constraints. RepeatedGreedy achieves a $(1 + O(1/\sqrt{k}))k$-approximation using $O(nr\sqrt{k})$ function evaluations (here, $n$ and $r$ denote the size of the ground set and the maximum size of a feasible solution, respectively). We then show that, by a careful sampling procedure, we can run the greedy algorithm only \textit{once} and obtain the (currently) fastest randomized algorithm, called SampleGreedy, for maximizing a submodular function subject to $k$-extendible system constraints (a subclass of $k$-independent system constraints). SampleGreedy achieves a $(k + 3)$-approximation with only $O(nr/k)$ function evaluations. Finally, we derive an almost matching lower bound, showing that no polynomial-time algorithm can have an approximation ratio smaller than $k + 1/2 - \varepsilon$. To further support our theoretical results, we compare the performance of RepeatedGreedy and SampleGreedy with prior art in a concrete application (movie recommendation). We consistently observe that while SampleGreedy achieves practically the same utility as the best baseline, it runs at least two orders of magnitude faster.
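To make the two algorithms concrete, below is a minimal Python sketch of how they fit together, assuming a value oracle f(S) and an independence oracle feasible(S) for the constraint. The function names are illustrative, not the authors' reference implementation; in particular, the unconstrained-maximization subroutine that RepeatedGreedy applies to each greedy solution and the exact k-dependent sampling probability used by SampleGreedy are omitted here (see the paper for those details).

import random

def greedy(f, feasible, ground):
    """Classical greedy: repeatedly add the feasible element with the
    largest positive marginal gain; stop when no addition helps."""
    S = set()
    candidates = set(ground)
    while candidates:
        gains = {e: f(S | {e}) - f(S)
                 for e in candidates if feasible(S | {e})}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
        candidates.discard(best)
    return S

def repeated_greedy(f, feasible, ground, num_runs):
    """RepeatedGreedy sketch: run greedy ~O(sqrt(k)) times, removing each
    run's output from the ground set, and keep the best set seen.
    (The paper additionally runs an unconstrained submodular
    maximization on each greedy solution; omitted in this sketch.)"""
    remaining = set(ground)
    best = set()
    for _ in range(num_runs):
        S = greedy(f, feasible, remaining)
        if f(S) > f(best):
            best = S
        remaining -= S
    return best

def sample_greedy(f, feasible, ground, p):
    """SampleGreedy sketch: keep each element independently with
    probability p, then run greedy once on the sample. The paper
    chooses p as a function of k to obtain the (k+3)-approximation;
    the exact value is not reproduced here."""
    sample = {e for e in ground if random.random() < p}
    return greedy(f, feasible, sample)

In this sketch, one greedy pass makes at most $O(n)$ oracle calls per added element, hence $O(nr)$ calls per run, which is consistent with the $O(nr\sqrt{k})$ total for $O(\sqrt{k})$ repetitions; SampleGreedy's $O(nr/k)$ bound reflects that its single greedy run operates on a much smaller sampled ground set.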

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-feldman17b,
  title     = {Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization},
  author    = {Feldman, Moran and Harshaw, Christopher and Karbasi, Amin},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {758--784},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/feldman17b/feldman17b.pdf},
  url       = {https://proceedings.mlr.press/v65/feldman17b.html}
}
APA
Feldman, M., Harshaw, C. & Karbasi, A. (2017). Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:758-784. Available from https://proceedings.mlr.press/v65/feldman17b.html.
