The Power of Adaptivity for Stochastic Submodular Cover

Rohan Ghuge, Anupam Gupta, Viswanath Nagarajan
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3702-3712, 2021.

Abstract

In the stochastic submodular cover problem, the goal is to select a subset of stochastic items of minimum expected cost to cover a submodular function. Solutions in this setting correspond to a sequential decision process that selects items one by one “adaptively” (depending on prior observations). While such adaptive solutions achieve the best objective, the inherently sequential nature makes them undesirable in many applications. We ask: how well can solutions with only a few adaptive rounds approximate fully-adaptive solutions? We consider both cases where the stochastic items are independent, and where they are correlated. For both situations, we obtain nearly tight answers, establishing smooth tradeoffs between the number of adaptive rounds and the solution quality, relative to fully adaptive solutions. Experiments on synthetic and real datasets validate the practical performance of our algorithms, showing qualitative improvements in the solutions as we allow more rounds of adaptivity; in practice, solutions using just a few rounds of adaptivity are nearly as good as fully adaptive solutions.
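To make the covering objective in the abstract concrete, here is a minimal sketch of the classical greedy rule for deterministic submodular cover (pick the item with the largest marginal gain per unit cost until the function is fully covered). This is standard background, not the paper's algorithm; the paper studies the stochastic setting and how many adaptive rounds suffice to approximate fully adaptive policies. The function names and the toy coverage instance below are illustrative assumptions.

def coverage(selected, sets):
    """f(S): number of ground elements covered by the chosen sets (monotone submodular)."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_submodular_cover(sets, costs, target):
    """Greedily add the item with best marginal-gain-per-unit-cost until f(S) reaches target."""
    chosen = []
    while coverage(chosen, sets) < target:
        best, best_ratio = None, 0.0
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage(chosen + [i], sets) - coverage(chosen, sets)
            if gain > 0 and gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:  # no remaining item increases coverage
            break
        chosen.append(best)
    return chosen

# Toy instance: three candidate sets over ground elements {1,...,5}.
sets = [{1, 2, 3}, {3, 4}, {4, 5}]
costs = [2.0, 1.0, 1.5]
print(greedy_submodular_cover(sets, costs, target=5))  # prints [1, 0, 2] for this instance

In the stochastic version studied in the paper, each item's contribution is only revealed after it is selected, so a fully adaptive policy would recompute this choice after every observation; the paper asks how much is lost when such recomputation is limited to a few rounds.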

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ghuge21a,
  title     = {The Power of Adaptivity for Stochastic Submodular Cover},
  author    = {Ghuge, Rohan and Gupta, Anupam and Nagarajan, Viswanath},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3702--3712},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ghuge21a/ghuge21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ghuge21a.html},
  abstract  = {In the stochastic submodular cover problem, the goal is to select a subset of stochastic items of minimum expected cost to cover a submodular function. Solutions in this setting correspond to a sequential decision process that selects items one by one “adaptively” (depending on prior observations). While such adaptive solutions achieve the best objective, the inherently sequential nature makes them undesirable in many applications. We ask: \emph{how well can solutions with only a few adaptive rounds approximate fully-adaptive solutions?} We consider both cases where the stochastic items are independent, and where they are correlated. For both situations, we obtain nearly tight answers, establishing smooth tradeoffs between the number of adaptive rounds and the solution quality, relative to fully adaptive solutions. Experiments on synthetic and real datasets validate the practical performance of our algorithms, showing qualitative improvements in the solutions as we allow more rounds of adaptivity; in practice, solutions using just a few rounds of adaptivity are nearly as good as fully adaptive solutions.}
}
Endnote
%0 Conference Paper
%T The Power of Adaptivity for Stochastic Submodular Cover
%A Rohan Ghuge
%A Anupam Gupta
%A Viswanath Nagarajan
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-ghuge21a
%I PMLR
%P 3702--3712
%U https://proceedings.mlr.press/v139/ghuge21a.html
%V 139
%X In the stochastic submodular cover problem, the goal is to select a subset of stochastic items of minimum expected cost to cover a submodular function. Solutions in this setting correspond to a sequential decision process that selects items one by one “adaptively” (depending on prior observations). While such adaptive solutions achieve the best objective, the inherently sequential nature makes them undesirable in many applications. We ask: how well can solutions with only a few adaptive rounds approximate fully-adaptive solutions? We consider both cases where the stochastic items are independent, and where they are correlated. For both situations, we obtain nearly tight answers, establishing smooth tradeoffs between the number of adaptive rounds and the solution quality, relative to fully adaptive solutions. Experiments on synthetic and real datasets validate the practical performance of our algorithms, showing qualitative improvements in the solutions as we allow more rounds of adaptivity; in practice, solutions using just a few rounds of adaptivity are nearly as good as fully adaptive solutions.
APA
Ghuge, R., Gupta, A. & Nagarajan, V. (2021). The Power of Adaptivity for Stochastic Submodular Cover. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3702-3712. Available from https://proceedings.mlr.press/v139/ghuge21a.html.