Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:758-784, 2017.
Abstract
It is known that greedy methods perform well for maximizing monotone submodular functions. At the same time, such methods perform poorly in the face of non-monotonicity. In this paper, we show, perhaps surprisingly, that invoking the classical greedy algorithm O(√k) times leads to the (currently) fastest deterministic algorithm, called RepeatedGreedy, for maximizing a general submodular function subject to k-independent system constraints. RepeatedGreedy achieves an approximation ratio of (1 + O(1/√k))k using O(nr√k) function evaluations, where n and r denote the size of the ground set and the maximum size of a feasible solution, respectively. We then show that, by a careful sampling procedure, we can run the greedy algorithm only once and obtain the (currently) fastest randomized algorithm, called SampleGreedy, for maximizing a submodular function subject to k-extendible system constraints (a subclass of k-independent system constraints). SampleGreedy achieves a (k + 3)-approximation with only O(nr/k) function evaluations. Finally, we derive an almost matching lower bound, showing that no polynomial-time algorithm can have an approximation ratio smaller than k + 1/2 − ε. To further support our theoretical results, we compare the performance of RepeatedGreedy and SampleGreedy with prior art in a concrete application (movie recommendation). We consistently observe that while SampleGreedy achieves practically the same utility as the best baseline, it runs at least two orders of magnitude faster.
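The sketch below illustrates the sample-then-greedy idea summarized in the abstract: thin the ground set by random sampling, then run a single greedy pass subject to the constraint. It is a minimal illustration, not the paper's exact SampleGreedy; the value oracle f, the independence oracle is_independent, and the sampling probability 1/(k+1) are assumptions made for the example.

```python
import random


def sample_then_greedy(ground_set, f, is_independent, k, seed=0):
    """Illustrative sample-then-greedy sketch (not the paper's exact algorithm).

    f(S) -> float            : value oracle for the submodular function.
    is_independent(S) -> bool: membership oracle for the constraint system.
    k                        : parameter of the k-extendible system.
    """
    rng = random.Random(seed)

    # Keep each element independently with probability ~1/(k+1) (an assumed
    # choice here), so that a single greedy pass over the thinned set suffices.
    p = 1.0 / (k + 1)
    remaining = {e for e in ground_set if rng.random() < p}

    solution = set()
    while remaining:
        # Greedy step: pick the feasible element with the largest marginal gain.
        base_val = f(solution)
        best, best_gain = None, 0.0
        for e in remaining:
            if not is_independent(solution | {e}):
                continue
            gain = f(solution | {e}) - base_val
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no feasible element adds positive value
            break
        solution.add(best)
        remaining.discard(best)
    return solution
```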