Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:304-313, 2018.
Abstract
We analyze the performance of the greedy algorithm, and also of a discrete semigradient-based algorithm, for maximizing the sum of a suBmodular and suPermodular (BP) function (both terms nonnegative, monotone nondecreasing) under two types of constraints: either a cardinality constraint or $p \geq 1$ matroid independence constraints. These problems occur naturally in several real-world applications in data science, machine learning, and artificial intelligence. Without further assumptions, the problems are inapproximable to any factor. Using the curvature $\kappa_f$ of the submodular term, and introducing $\kappa^g$ for the supermodular term (a natural dual curvature for supermodular functions), however, both of which are computable in linear time, we show that BP maximization can be efficiently approximated by both the greedy and the semigradient-based algorithm. The algorithms yield multiplicative guarantees of $\frac{1}{\kappa_f}\left[1 - e^{-(1-\kappa^g)\kappa_f}\right]$ and $\frac{1-\kappa^g}{(1-\kappa^g)\kappa_f + p}$ for the two types of constraints, respectively. For purely monotone supermodular constrained maximization, these reduce to $1-\kappa^g$ and $(1-\kappa^g)/p$ for the two types of constraints, respectively. We also analyze the hardness of BP maximization and show that our guarantees match the hardness up to a constant factor and up to a factor of $O(\ln p)$, respectively. Computational experiments supporting our analysis are also provided.
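As a concrete illustration of the quantities in the abstract, the following is a minimal sketch (not the paper's code) of the greedy algorithm for BP maximization under a cardinality constraint, together with the two linear-time curvature computations. The toy objective $h = f + g$, the coverage sets, and the budget are all illustrative assumptions.

```python
# Sketch of greedy BP maximization and curvature computation.
# All concrete functions and data below are illustrative assumptions.

def greedy_bp_max(h, V, k):
    """Greedily add the element with the largest marginal gain, k times."""
    S = set()
    for _ in range(k):
        best = max((v for v in V if v not in S),
                   key=lambda v: h(S | {v}) - h(S))
        S.add(best)
    return S

def submodular_curvature(f, V):
    """kappa_f = 1 - min_v f(v | V - {v}) / f({v}): |V| oracle calls.
    Assumes f is normalized (f(empty) = 0) and f({v}) > 0 for all v."""
    return 1 - min((f(V) - f(V - {v})) / f({v}) for v in V)

def supermodular_curvature(g, V):
    """kappa^g = 1 - min_v g(v | empty) / g(v | V - {v}): |V| oracle calls.
    Assumes g is normalized and strictly monotone at V."""
    return 1 - min(g({v}) / (g(V) - g(V - {v})) for v in V)

# Toy instance: submodular coverage term f plus supermodular synergy term g,
# both nonnegative and monotone nondecreasing.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
V = set(covers)

def f(S):  # coverage: monotone submodular
    return len(set().union(*(covers[v] for v in S))) if S else 0

def g(S):  # modular part plus pairwise synergy: monotone supermodular
    n = len(S)
    return n + 0.1 * n * (n - 1) / 2

h = lambda S: f(S) + g(S)

S = greedy_bp_max(h, V, 2)  # greedily pick 2 of the 3 elements
```

On this instance every size-2 solution happens to achieve the same value, so greedy is optimal here; in general its value is bounded below by the $\frac{1}{\kappa_f}\left[1 - e^{-(1-\kappa^g)\kappa_f}\right]$ factor computed from the two curvatures above.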