Differentially Private Monotone Submodular Maximization Under Matroid and Knapsack Constraints

Omid Sadeghi, Maryam Fazel
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2908-2916, 2021.

Abstract

Numerous tasks in machine learning and artificial intelligence have been modeled as submodular maximization problems. These problems usually involve sensitive data about individuals, and in addition to maximizing the utility, privacy concerns should be considered. In this paper, we study the general framework of non-negative monotone submodular maximization subject to matroid or knapsack constraints in both offline and online settings. For the offline setting, we propose a differentially private $(1-\frac{\kappa}{e})$-approximation algorithm, where $\kappa\in[0,1]$ is the total curvature of the submodular set function, which improves upon prior works in terms of approximation guarantee and query complexity under the same privacy budget. In the online setting, we propose the first differentially private algorithm, and we specify the conditions under which the regret bound scales as $O(\sqrt{T})$, i.e., privacy could be ensured while maintaining the same regret bound as the optimal regret guarantee in the non-private setting.
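To make the setting concrete, here is a minimal illustrative sketch (not the paper's algorithm) of differentially private greedy maximization of a monotone submodular coverage function under a simple cardinality constraint, using the exponential mechanism to select each element. The toy dataset, the `private_greedy` function, and the naive per-step privacy-budget split are all assumptions made for this demo.

```python
# Illustrative sketch only: DP greedy for monotone submodular maximization
# under a cardinality constraint, via the exponential mechanism.
import math
import random

random.seed(0)

# Toy coverage instance: each candidate set covers some users (the sensitive data).
candidates = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5},
    "d": {1, 5, 6},
}

def coverage(selected):
    """Monotone submodular utility: number of distinct users covered."""
    covered = set()
    for name in selected:
        covered |= candidates[name]
    return len(covered)

def private_greedy(k, epsilon, sensitivity=1.0):
    """Pick k candidates; each step samples an element with probability
    proportional to exp(eps_step * marginal_gain / (2 * sensitivity))."""
    eps_step = epsilon / k  # naive split of the privacy budget across steps
    selected = []
    for _ in range(k):
        remaining = [n for n in candidates if n not in selected]
        gains = [coverage(selected + [n]) - coverage(selected) for n in remaining]
        weights = [math.exp(eps_step * g / (2.0 * sensitivity)) for g in gains]
        selected.append(random.choices(remaining, weights=weights, k=1)[0])
    return selected

S = private_greedy(k=2, epsilon=4.0)
value = coverage(S)
```

Larger `epsilon` concentrates the sampling on the true greedy choice (better utility, weaker privacy); as `epsilon` shrinks, the selection approaches uniform sampling.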

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-sadeghi21a,
  title = {Differentially Private Monotone Submodular Maximization Under Matroid and Knapsack Constraints},
  author = {Sadeghi, Omid and Fazel, Maryam},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages = {2908--2916},
  year = {2021},
  editor = {Banerjee, Arindam and Fukumizu, Kenji},
  volume = {130},
  series = {Proceedings of Machine Learning Research},
  month = {13--15 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v130/sadeghi21a/sadeghi21a.pdf},
  url = {https://proceedings.mlr.press/v130/sadeghi21a.html},
  abstract = {Numerous tasks in machine learning and artificial intelligence have been modeled as submodular maximization problems. These problems usually involve sensitive data about individuals, and in addition to maximizing the utility, privacy concerns should be considered. In this paper, we study the general framework of non-negative monotone submodular maximization subject to matroid or knapsack constraints in both offline and online settings. For the offline setting, we propose a differentially private $(1-\frac{\kappa}{e})$-approximation algorithm, where $\kappa\in[0,1]$ is the total curvature of the submodular set function, which improves upon prior works in terms of approximation guarantee and query complexity under the same privacy budget. In the online setting, we propose the first differentially private algorithm, and we specify the conditions under which the regret bound scales as $O(\sqrt{T})$, i.e., privacy could be ensured while maintaining the same regret bound as the optimal regret guarantee in the non-private setting.}
}
Endnote
%0 Conference Paper %T Differentially Private Monotone Submodular Maximization Under Matroid and Knapsack Constraints %A Omid Sadeghi %A Maryam Fazel %B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2021 %E Arindam Banerjee %E Kenji Fukumizu %F pmlr-v130-sadeghi21a %I PMLR %P 2908--2916 %U https://proceedings.mlr.press/v130/sadeghi21a.html %V 130 %X Numerous tasks in machine learning and artificial intelligence have been modeled as submodular maximization problems. These problems usually involve sensitive data about individuals, and in addition to maximizing the utility, privacy concerns should be considered. In this paper, we study the general framework of non-negative monotone submodular maximization subject to matroid or knapsack constraints in both offline and online settings. For the offline setting, we propose a differentially private $(1-\frac{\kappa}{e})$-approximation algorithm, where $\kappa\in[0,1]$ is the total curvature of the submodular set function, which improves upon prior works in terms of approximation guarantee and query complexity under the same privacy budget. In the online setting, we propose the first differentially private algorithm, and we specify the conditions under which the regret bound scales as $O(\sqrt{T})$, i.e., privacy could be ensured while maintaining the same regret bound as the optimal regret guarantee in the non-private setting.
APA
Sadeghi, O. & Fazel, M. (2021). Differentially Private Monotone Submodular Maximization Under Matroid and Knapsack Constraints. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2908-2916. Available from https://proceedings.mlr.press/v130/sadeghi21a.html.