Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings

Aryan Mokhtari, Hamed Hassani, Amin Karbasi
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3616-3625, 2018.

Abstract

In this paper, we showcase the interplay between discrete and continuous optimization in network-structured settings. We propose the first fully decentralized optimization method for a wide class of non-convex objective functions that possess a diminishing-returns property. More specifically, given an arbitrary connected network and a global continuous submodular function, formed by a sum of local functions, we develop Decentralized Continuous Greedy (DCG), a message-passing algorithm that converges to the tight $(1-1/e)$ approximation factor of the optimum global solution using only local computation and communication. We also provide strong convergence bounds as a function of network size and the spectral characteristics of the underlying topology. Interestingly, DCG readily provides a simple recipe for decentralized discrete submodular maximization by means of continuous relaxations. Formally, we demonstrate that by lifting the local discrete functions to continuous domains and using DCG as an interface, we can develop a consensus algorithm that also achieves the tight $(1-1/e)$ approximation guarantee of the global discrete solution once a proper rounding scheme is applied.
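
The abstract describes DCG only at a high level. The sketch below is a minimal, illustrative rendering of a decentralized continuous-greedy-style scheme of that flavor: each node repeatedly mixes its iterate and a gradient-tracking vector with its neighbors' (consensus), then takes a $1/T$ Frank-Wolfe-style step along the output of a local linear-maximization oracle. The separable logarithmic local objectives, ring topology with lazy-Metropolis mixing weights, step sizes, and cardinality-type polytope are all illustrative assumptions for the demo, not the exact algorithm or setup from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, k, T = 5, 10, 3, 200

# Local monotone DR-submodular objectives (assumed for illustration):
# F_i(x) = sum_j A[i, j] * log(1 + x_j), a separable concave function.
A = rng.uniform(0.5, 1.5, size=(n_nodes, dim))

def local_grad(i, x):
    # Gradient of F_i at x.
    return A[i] / (1.0 + x)

def lmo(d):
    # Linear maximization over {x in [0,1]^dim : sum(x) <= k}:
    # put mass 1 on the k largest coordinates of d.
    v = np.zeros(dim)
    v[np.argsort(d)[-k:]] = 1.0
    return v

# Doubly stochastic mixing matrix for a ring of agents (lazy-Metropolis weights).
W = 0.5 * np.eye(n_nodes)
for i in range(n_nodes):
    W[i, (i - 1) % n_nodes] += 0.25
    W[i, (i + 1) % n_nodes] += 0.25

x = np.zeros((n_nodes, dim))   # row i: node i's local iterate, started at the origin
d = np.zeros((n_nodes, dim))   # row i: node i's gradient-tracking vector
alpha = 1.0 / np.sqrt(T)       # weight on fresh local gradients (an illustrative choice)

for t in range(T):
    grads = np.stack([local_grad(i, x[i]) for i in range(n_nodes)])
    d = (1 - alpha) * (W @ d) + alpha * grads   # consensus on gradient information
    v = np.stack([lmo(d[i]) for i in range(n_nodes)])
    x = W @ x + v / T                           # consensus step plus 1/T greedy ascent step

# Evaluate the global objective F(x) = sum_i F_i(x) at node 0's final iterate.
value = sum(np.sum(A[i] * np.log(1.0 + x[0])) for i in range(n_nodes))
print(f"global objective at node 0's iterate: {value:.3f}")

Mixing the gradient-tracking vectors as well as the iterates is what lets each node act on an estimate of the global gradient $\sum_i \nabla F_i$ rather than only its own local gradient, which is what makes a continuous-greedy update meaningful when the objective is distributed across the network.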

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-mokhtari18a,
  title     = {Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings},
  author    = {Mokhtari, Aryan and Hassani, Hamed and Karbasi, Amin},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3616--3625},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/mokhtari18a/mokhtari18a.pdf},
  url       = {https://proceedings.mlr.press/v80/mokhtari18a.html}
}
Endnote
%0 Conference Paper
%T Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
%A Aryan Mokhtari
%A Hamed Hassani
%A Amin Karbasi
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-mokhtari18a
%I PMLR
%P 3616--3625
%U https://proceedings.mlr.press/v80/mokhtari18a.html
%V 80
APA
Mokhtari, A., Hassani, H. & Karbasi, A. (2018). Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3616-3625. Available from https://proceedings.mlr.press/v80/mokhtari18a.html.
