“Bring Your Own Greedy”+Max: Near-Optimal 1/2-Approximations for Submodular Knapsack

Grigory Yaroslavtsev, Samson Zhou, Dmitrii Avdiukhin
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3263-3274, 2020.

Abstract

The problem of selecting a small representative summary of a large dataset is a cornerstone of machine learning, optimization and data science. Motivated by applications to recommendation systems and other scenarios with query-limited access to vast amounts of data, we propose a new rigorous algorithmic framework for a standard formulation of this problem as submodular maximization subject to a linear (knapsack) constraint. Our framework is based on augmenting all partial Greedy solutions with the best additional item. It can be instantiated with negligible overhead in any model of computation which allows the classic greedy algorithm and its variants to be implemented. We give such instantiations in the offline (Greedy+Max), multi-pass streaming (Sieve+Max) and distributed (Distributed Sieve+Max) settings. Our algorithms give a $(1/2-\epsilon)$-approximation with most other key parameters of interest being near-optimal. Our analysis is based on a new set of first-order linear differential inequalities and their robust approximate versions. Experiments on typical datasets (movie recommendations, influence maximization) confirm the scalability and high quality of the solutions obtained via our framework. Instance-specific approximations are typically in the 0.6-0.7 range and frequently beat even the $(1-1/e) \approx 0.63$ worst-case barrier for polynomial-time algorithms.
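
To make the framework concrete, here is a minimal Python sketch of the Greedy+Max idea from the abstract: run the standard density-based Greedy and, for every partial greedy solution (including the empty one), record the value obtained by augmenting it with the single best item that still fits; return the best such value. All names (greedy_plus_max, augmented_value) are illustrative, and the sketch deliberately omits the lazy evaluations and approximate thresholding that the paper's streaming (Sieve+Max) and distributed variants rely on.

def greedy_plus_max(items, f, budget):
    """items:  list of (element, cost) pairs with positive costs
    f:      monotone submodular set function evaluated on frozensets
    budget: knapsack capacity"""
    def augmented_value(S, used):
        # Value of S augmented with the best single item that still fits.
        best = f(S)
        for e, c in items:
            if e not in S and used + c <= budget:
                best = max(best, f(S | {e}))
        return best

    S, used = frozenset(), 0.0
    best = augmented_value(S, used)  # empty prefix + best single item
    remaining = dict(items)
    while remaining:
        feasible = [(e, c) for e, c in remaining.items() if used + c <= budget]
        if not feasible:
            break
        # Greedy step: maximize marginal gain per unit of cost.
        e, c = max(feasible, key=lambda ec: (f(S | {ec[0]}) - f(S)) / ec[1])
        if f(S | {e}) <= f(S):  # no item with positive marginal gain remains
            break
        S, used = S | {e}, used + c
        del remaining[e]
        best = max(best, augmented_value(S, used))
    return best

# Toy usage with a weighted-coverage objective (monotone submodular):
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
costs = {"a": 2.0, "b": 1.0, "c": 2.0}
f = lambda S: len(set().union(*(sets[e] for e in S)))
print(greedy_plus_max(list(costs.items()), f, budget=3.0))  # prints 4

The +Max augmentation guards against the classic failure mode of Greedy under a knapsack constraint, where a single large, high-value item is passed over in favor of many cheap ones; per the abstract, tracking the best single-item augmentation of every greedy prefix is what yields the $(1/2-\epsilon)$ guarantee.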

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-yaroslavtsev20a,
  title     = {“Bring Your Own Greedy”+Max: Near-Optimal 1/2-Approximations for Submodular Knapsack},
  author    = {Yaroslavtsev, Grigory and Zhou, Samson and Avdiukhin, Dmitrii},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3263--3274},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/yaroslavtsev20a/yaroslavtsev20a.pdf},
  url       = {https://proceedings.mlr.press/v108/yaroslavtsev20a.html},
  abstract  = {The problem of selecting a small representative summary of a large dataset is a cornerstone of machine learning, optimization and data science. Motivated by applications to recommendation systems and other scenarios with query-limited access to vast amounts of data, we propose a new rigorous algorithmic framework for a standard formulation of this problem as submodular maximization subject to a linear (knapsack) constraint. Our framework is based on augmenting all partial Greedy solutions with the best additional item. It can be instantiated with negligible overhead in any model of computation which allows the classic greedy algorithm and its variants to be implemented. We give such instantiations in the offline (Greedy+Max), multi-pass streaming (Sieve+Max) and distributed (Distributed Sieve+Max) settings. Our algorithms give a $(1/2-\epsilon)$-approximation with most other key parameters of interest being near-optimal. Our analysis is based on a new set of first-order linear differential inequalities and their robust approximate versions. Experiments on typical datasets (movie recommendations, influence maximization) confirm the scalability and high quality of the solutions obtained via our framework. Instance-specific approximations are typically in the 0.6-0.7 range and frequently beat even the $(1-1/e) \approx 0.63$ worst-case barrier for polynomial-time algorithms.}
}
Endnote
%0 Conference Paper
%T “Bring Your Own Greedy”+Max: Near-Optimal 1/2-Approximations for Submodular Knapsack
%A Grigory Yaroslavtsev
%A Samson Zhou
%A Dmitrii Avdiukhin
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-yaroslavtsev20a
%I PMLR
%P 3263--3274
%U https://proceedings.mlr.press/v108/yaroslavtsev20a.html
%V 108
%X The problem of selecting a small representative summary of a large dataset is a cornerstone of machine learning, optimization and data science. Motivated by applications to recommendation systems and other scenarios with query-limited access to vast amounts of data, we propose a new rigorous algorithmic framework for a standard formulation of this problem as submodular maximization subject to a linear (knapsack) constraint. Our framework is based on augmenting all partial Greedy solutions with the best additional item. It can be instantiated with negligible overhead in any model of computation which allows the classic greedy algorithm and its variants to be implemented. We give such instantiations in the offline (Greedy+Max), multi-pass streaming (Sieve+Max) and distributed (Distributed Sieve+Max) settings. Our algorithms give a $(1/2-\epsilon)$-approximation with most other key parameters of interest being near-optimal. Our analysis is based on a new set of first-order linear differential inequalities and their robust approximate versions. Experiments on typical datasets (movie recommendations, influence maximization) confirm the scalability and high quality of the solutions obtained via our framework. Instance-specific approximations are typically in the 0.6-0.7 range and frequently beat even the $(1-1/e) \approx 0.63$ worst-case barrier for polynomial-time algorithms.
APA
Yaroslavtsev, G., Zhou, S. & Avdiukhin, D. (2020). “Bring Your Own Greedy”+Max: Near-Optimal 1/2-Approximations for Submodular Knapsack. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3263-3274. Available from https://proceedings.mlr.press/v108/yaroslavtsev20a.html.
