Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26116-26134, 2022.
Abstract
In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which has wide applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function $F$ derived from a factor-revealing optimization problem, any stationary point of which provides a $(1-e^{-\gamma})$-approximation to the global maximum of the $\gamma$-weakly DR-submodular objective function $f \in C^{1,1}_L(\mathcal{X})$. In the offline scenario, we propose a boosting gradient ascent method achieving a $(1-e^{-\gamma}-\epsilon^2)$-approximation after $O(1/\epsilon^2)$ iterations, which improves on the $\frac{\gamma^2}{1+\gamma^2}$ approximation ratio of the classical gradient ascent algorithm. In the online setting, for the first time we consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious function $F$. We verify that this boosting online algorithm achieves a regret of $O(\sqrt{D})$ against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight, where $D$ is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain $O(\sqrt{T})$ regret against a $(1-e^{-\gamma})$-approximation with $O(1)$ gradient queries at each time step when no delay exists, i.e., $D=T$. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
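As a rough illustration of the offline procedure sketched in the abstract, the snippet below runs projected gradient ascent on a Monte Carlo estimate of a "boosted" surrogate gradient, i.e., a weighted average of gradients of $f$ evaluated at scaled-down points. The exponential weight $e^{\gamma(z-1)}$ over scaling levels $z\in(0,1]$, the sample size, the step size, and the function names (`boosted_surrogate_grad`, `boosting_gradient_ascent`) are illustrative assumptions for this sketch, not the paper's exact construction of the non-oblivious function $F$.

```python
import numpy as np

def boosted_surrogate_grad(grad_f, x, gamma=1.0, n_samples=20, rng=None):
    """Monte Carlo estimate of a boosted (non-oblivious) surrogate gradient.

    Assumed form (illustrative): E_z[ e^{gamma*(z-1)} * grad_f(z * x) ] with z
    drawn uniformly from (0, 1]. `grad_f` may return an unbiased stochastic
    gradient of f, matching the stochastic-feedback setting.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        z = rng.uniform(1e-6, 1.0)                # scaling level in (0, 1]
        g += np.exp(gamma * (z - 1.0)) * grad_f(z * x)
    return g / n_samples

def boosting_gradient_ascent(grad_f, project, x0, step=0.05, iters=400, gamma=1.0, rng=None):
    """Projected gradient ascent on the surrogate gradient.

    `project` is the Euclidean projection onto the convex constraint set.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = boosted_surrogate_grad(grad_f, x, gamma=gamma, rng=rng)
        x = project(x + step * g)                 # ascent step, then project back
    return x

if __name__ == "__main__":
    # Toy monotone DR-submodular objective: f(x) = h^T x - 0.5 x^T A x with A >= 0
    # entrywise (Hessian -A has non-positive entries) and h = A 1, so grad f >= 0 on [0,1]^d.
    d = 5
    rng = np.random.default_rng(0)
    A = rng.uniform(0.0, 1.0, size=(d, d))
    A = (A + A.T) / 2.0
    h = A @ np.ones(d)
    grad_f = lambda x: h - A @ x
    project = lambda x: np.clip(x, 0.0, 1.0)      # projection onto the box [0,1]^d
    x_hat = boosting_gradient_ascent(grad_f, project, x0=np.zeros(d), rng=rng)
    print("solution:", np.round(x_hat, 3))
```

In an online variant, one would apply the same surrogate-gradient step once per round using whatever (possibly delayed) stochastic gradient feedback has arrived, rather than iterating to convergence on a single objective.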