Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3616-3625, 2018.
Abstract
In this paper, we showcase the interplay between discrete and continuous optimization in network-structured settings. We propose the first fully decentralized optimization method for a wide class of nonconvex objective functions that possess a diminishing returns property. More specifically, given an arbitrary connected network and a global continuous submodular function, formed by a sum of local functions, we develop Decentralized Continuous Greedy (DCG), a message passing algorithm that converges to the tight $(1-1/e)$ approximation factor of the optimum global solution using only local computation and communication. We also provide strong convergence bounds as a function of network size and spectral characteristics of the underlying topology. Interestingly, DCG readily provides a simple recipe for decentralized discrete submodular maximization through the means of continuous relaxations. Formally, we demonstrate that by lifting the local discrete functions to continuous domains and using DCG as an interface we can develop a consensus algorithm that also achieves the tight $(1-1/e)$ approximation guarantee of the global discrete solution once a proper rounding scheme is applied.
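To illustrate the two ingredients the abstract describes, consensus averaging over a network and continuous-greedy (Frank-Wolfe-style) ascent steps, here is a minimal sketch on a toy problem. Everything in it is an illustrative assumption, not the paper's exact algorithm or guarantees: the ring network, the doubly stochastic mixing matrix `W`, the concave coverage-like local surrogates `f_i(x) = w_i . (1 - exp(-x))` standing in for local continuous submodular functions, and the cardinality-constraint polytope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: agents in the network, dimension, cardinality budget, iterations.
n_agents, d, k, T = 4, 6, 2, 50

# Doubly stochastic mixing matrix for a ring network (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

# Local monotone DR-submodular stand-ins: f_i(x) = w_i . (1 - exp(-x)).
weights = rng.uniform(0.5, 1.5, size=(n_agents, d))

def local_grad(i, x):
    """Gradient of agent i's local surrogate function."""
    return weights[i] * np.exp(-x)

def lmo(g):
    """Linear maximization over {x in [0,1]^d : sum(x) <= k}: pick top-k coordinates."""
    v = np.zeros(d)
    v[np.argsort(g)[-k:]] = 1.0
    return v

# Each agent keeps its own iterate; alternate consensus averaging with
# a continuous-greedy step of size 1/T using only local gradient information.
X = np.zeros((n_agents, d))
for t in range(T):
    X = W @ X                          # consensus: average with network neighbors
    for i in range(n_agents):
        v = lmo(local_grad(i, X[i]))   # local linear maximization oracle
        X[i] = X[i] + v / T            # continuous-greedy ascent step

x_bar = X.mean(axis=0)
print(np.round(x_bar, 3), x_bar.sum())
```

Since each step adds a vertex of the constraint polytope scaled by $1/T$ and the mixing matrix preserves per-agent mass, the averaged iterate lands in the polytope after $T$ steps; in the paper's setting, the analogous fractional solution would then be rounded to a discrete set.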