High Probability Bounds for Stochastic Continuous Submodular Maximization

Evan Becker, Jingdong Gao, Ted Zadouri, Baharan Mirzasoleiman
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5958-5979, 2023.

Abstract

We consider maximization of stochastic monotone continuous submodular functions (CSF) with a diminishing-returns property. Existing algorithms only guarantee performance in expectation and do not bound the probability of obtaining a bad solution. This implies that on a particular run of an algorithm, the solution may be much worse than the in-expectation guarantee. In this paper, we first empirically verify that this is indeed the case. Then, we provide the first high-probability analysis of the existing methods for stochastic CSF maximization, namely PGA, boosted PGA, SCG, and SCG++. Finally, we provide an improved high-probability bound for SCG, under slightly stronger assumptions, with a better convergence rate than that of the expected solution. Through extensive experiments on non-concave quadratic programming (NQP) and optimal budget allocation, we confirm the validity of our bounds and show that even in the worst case, PGA converges to $OPT/2$, and boosted PGA, SCG, and SCG++ converge to $(1 - 1/e)OPT$, but at a slower rate than that of the expected solution.
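As a concrete illustration of the setting described above, the sketch below runs stochastic projected gradient ascent (PGA) on a toy monotone DR-submodular NQP instance with noisy gradient observations. It is a minimal sketch assuming NumPy; the instance (H, h), the noise level sigma, the step-size schedule, and the horizon T are illustrative choices, not taken from the paper. In expectation such a run approaches $OPT/2$, while a single run can fall short of it, which is the gap that high-probability bounds quantify.

```python
import numpy as np

# Toy stochastic non-concave quadratic program (NQP):
#   f(x) = 0.5 * x^T H x + h^T x  on the box [0, 1]^d.
# With H entrywise non-positive, f is DR-submodular; h is chosen large
# enough that the gradient stays positive on the box, so f is monotone.
# All parameters here are illustrative, not from the paper.
rng = np.random.default_rng(0)
d = 10
H = -np.abs(rng.normal(size=(d, d)))   # non-positive entries => DR-submodular
H = (H + H.T) / 2
h = -H.sum(axis=1) + 1.0               # ensures H x + h >= 1 on [0, 1]^d (monotonicity)
sigma = 0.5                            # gradient noise level

def stochastic_grad(x):
    """Unbiased noisy estimate of the gradient H x + h."""
    return H @ x + h + sigma * rng.normal(size=d)

def project_box(x):
    """Euclidean projection onto the box constraint [0, 1]^d."""
    return np.clip(x, 0.0, 1.0)

# Stochastic projected gradient ascent with a diminishing step size.
T = 2000
x = np.zeros(d)
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)
    x = project_box(x + eta * stochastic_grad(x))

f_val = 0.5 * x @ H @ x + h @ x
print(f"f(x_T) after {T} stochastic PGA steps: {f_val:.3f}")
```

Repeating the loop over many random seeds and plotting the spread of f(x_T) across runs (rather than only its mean) is the kind of experiment that exposes the gap between in-expectation and high-probability guarantees.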

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-becker23a,
  title     = {High Probability Bounds for Stochastic Continuous Submodular Maximization},
  author    = {Becker, Evan and Gao, Jingdong and Zadouri, Ted and Mirzasoleiman, Baharan},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {5958--5979},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/becker23a/becker23a.pdf},
  url       = {https://proceedings.mlr.press/v206/becker23a.html},
  abstract  = {We consider maximization of stochastic monotone continuous submodular functions (CSF) with a diminishing return property. Existing algorithms only guarantee the performance in expectation, and do not bound the probability of getting a bad solution. This implies that for a particular run of the algorithms, the solution may be much worse than the provided guarantee in expectation. In this paper, we first empirically verify that this is indeed the case. Then, we provide the first high-probability analysis of the existing methods for stochastic CSF maximization, namely PGA, boosted PGA, SCG, and SCG++. Finally, we provide an improved high-probability bound for SCG, under slightly stronger assumptions, with a better convergence rate than that of the expected solution. Through extensive experiments on non-concave quadratic programming (NQP) and optimal budget allocation, we confirm the validity of our bounds and show that even in the worst-case, PGA converges to $OPT/2$, and boosted PGA, SCG, SCG++ converge to $(1 - 1/e)OPT$, but at a slower rate than that of the expected solution.}
}
Endnote
%0 Conference Paper
%T High Probability Bounds for Stochastic Continuous Submodular Maximization
%A Evan Becker
%A Jingdong Gao
%A Ted Zadouri
%A Baharan Mirzasoleiman
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-becker23a
%I PMLR
%P 5958--5979
%U https://proceedings.mlr.press/v206/becker23a.html
%V 206
%X We consider maximization of stochastic monotone continuous submodular functions (CSF) with a diminishing return property. Existing algorithms only guarantee the performance in expectation, and do not bound the probability of getting a bad solution. This implies that for a particular run of the algorithms, the solution may be much worse than the provided guarantee in expectation. In this paper, we first empirically verify that this is indeed the case. Then, we provide the first high-probability analysis of the existing methods for stochastic CSF maximization, namely PGA, boosted PGA, SCG, and SCG++. Finally, we provide an improved high-probability bound for SCG, under slightly stronger assumptions, with a better convergence rate than that of the expected solution. Through extensive experiments on non-concave quadratic programming (NQP) and optimal budget allocation, we confirm the validity of our bounds and show that even in the worst-case, PGA converges to $OPT/2$, and boosted PGA, SCG, SCG++ converge to $(1 - 1/e)OPT$, but at a slower rate than that of the expected solution.
APA
Becker, E., Gao, J., Zadouri, T. & Mirzasoleiman, B. (2023). High Probability Bounds for Stochastic Continuous Submodular Maximization. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:5958-5979. Available from https://proceedings.mlr.press/v206/becker23a.html.
