On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness

Sebastian Pokutta, Mohit Singh, Alfredo Torrico
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7772-7782, 2020.

Abstract

It is well known that the standard greedy algorithm guarantees a worst-case approximation factor of $1-1/e$ when maximizing a monotone submodular function under a cardinality constraint. However, empirical studies show that its performance is substantially better in practice. This raises a natural question of explaining this improved performance of the greedy algorithm. In this work, we define sharpness for submodular functions as a candidate explanation for this phenomenon. We show that the greedy algorithm provably performs better as the sharpness of the submodular function increases. This improvement ties in closely with the faster convergence rates of first order methods for sharp functions in convex optimization.
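The standard greedy algorithm referenced in the abstract admits a compact illustration. The sketch below is a minimal implementation of greedy selection under a cardinality constraint, with an illustrative coverage function (a classic monotone submodular example); neither the function names nor the toy instance come from the paper.

```python
# Minimal sketch of the standard greedy algorithm for maximizing a
# monotone submodular function f under a cardinality constraint k.
# (Illustrative only; the sharpness analysis in the paper is not shown here.)

def greedy(ground_set, f, k):
    """Pick k elements, each step adding the element of largest marginal gain."""
    S = set()
    for _ in range(min(k, len(ground_set))):
        # Marginal gain of adding e to the current solution S.
        best = max(ground_set - S, key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy instance: coverage function f(S) = |union of the sets indexed by S|,
# which is monotone and submodular.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
solution = greedy(set(sets), f, 2)
```

On this instance the first greedy pick is element 0 (marginal gain 3), and any second pick yields a solution of value 4, which here matches the optimum rather than the worst-case $1-1/e$ bound.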

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-pokutta20a,
  title     = {On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness},
  author    = {Pokutta, Sebastian and Singh, Mohit and Torrico, Alfredo},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7772--7782},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/pokutta20a/pokutta20a.pdf},
  url       = {https://proceedings.mlr.press/v119/pokutta20a.html},
  abstract  = {It is well known that the standard greedy algorithm guarantees a worst-case approximation factor of $1-1/e$ when maximizing a monotone submodular function under a cardinality constraint. However, empirical studies show that its performance is substantially better in practice. This raises a natural question of explaining this improved performance of the greedy algorithm. In this work, we define sharpness for submodular functions as a candidate explanation for this phenomenon. We show that the greedy algorithm provably performs better as the sharpness of the submodular function increases. This improvement ties in closely with the faster convergence rates of first order methods for sharp functions in convex optimization.}
}
Endnote
%0 Conference Paper
%T On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness
%A Sebastian Pokutta
%A Mohit Singh
%A Alfredo Torrico
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-pokutta20a
%I PMLR
%P 7772--7782
%U https://proceedings.mlr.press/v119/pokutta20a.html
%V 119
%X It is well known that the standard greedy algorithm guarantees a worst-case approximation factor of $1-1/e$ when maximizing a monotone submodular function under a cardinality constraint. However, empirical studies show that its performance is substantially better in practice. This raises a natural question of explaining this improved performance of the greedy algorithm. In this work, we define sharpness for submodular functions as a candidate explanation for this phenomenon. We show that the greedy algorithm provably performs better as the sharpness of the submodular function increases. This improvement ties in closely with the faster convergence rates of first order methods for sharp functions in convex optimization.
APA
Pokutta, S., Singh, M. & Torrico, A. (2020). On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7772-7782. Available from https://proceedings.mlr.press/v119/pokutta20a.html.