Differentially Private Online Submodular Minimization
Proceedings of Machine Learning Research, PMLR 89:1650-1658, 2019.
Abstract
In this paper we develop the first algorithms for online submodular minimization that preserve differential privacy under full information feedback and bandit feedback. Our first result is in the full information setting, where the algorithm can observe the entire function after making its decision at each time step. We give an algorithm in this setting that is $\varepsilon$-differentially private and achieves expected regret $\tilde{O}\left(\frac{n\sqrt{T}}{\varepsilon}\right)$ over $T$ rounds for a collection of $n$ elements. Our second result is in the bandit setting, where the algorithm can only observe the cost incurred by its chosen set and does not have access to the entire function. This setting is significantly more challenging due to the limited information. Our algorithm using bandit feedback is $\varepsilon$-differentially private and achieves expected regret $\tilde{O}\left(\frac{n^{3/2}T^{2/3}}{\varepsilon}\right)$.
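To make the online protocol concrete, the sketch below simulates $T$ rounds of online minimization for modular costs $f_t(S) = \sum_{i \in S} w_t[i]$ (a simple submodular special case that decomposes per element), using a follow-the-perturbed-leader-style rule with per-element noise. The noise here is only a stand-in for the perturbations a private algorithm would inject; this toy is an illustration of the regret setup, not the paper's algorithm or its privacy analysis, and all names (`run_ftpl`, the uniform noise scale `n / eps`) are assumptions.

```python
import random

def run_ftpl(T=200, n=5, eps=1.0, seed=0):
    """Toy online minimization of modular costs f_t(S) = sum_{i in S} w_t[i].

    Each round: perturb the cumulative per-element weights and include
    element i iff its perturbed total is negative, then observe the
    round's weights (full-information feedback) and pay f_t(S_t).
    """
    rng = random.Random(seed)
    cum = [0.0] * n          # cumulative per-element weights seen so far
    learner_cost = 0.0
    for _ in range(T):
        # Stand-in noise; a private variant would calibrate this carefully.
        noise = [rng.uniform(-1, 1) * n / eps for _ in range(n)]
        S = {i for i in range(n) if cum[i] + noise[i] < 0}
        w = [rng.uniform(-1, 1) for _ in range(n)]  # round-t weights revealed
        learner_cost += sum(w[i] for i in S)
        for i in range(n):
            cum[i] += w[i]
    # Best fixed set in hindsight: every element with negative total weight.
    best_cost = sum(min(c, 0.0) for c in cum)
    return learner_cost, best_cost

learner_cost, best_cost = run_ftpl()
regret = learner_cost - best_cost
```

For modular costs the hindsight minimizer decomposes element-by-element, which is what makes this toy tractable; the paper's setting allows arbitrary submodular $f_t$, where the minimization and the privacy accounting are substantially harder.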