Differentially Private and Lazy Online Convex Optimization
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:4599-4632, 2023.
Abstract
We study the task of differentially private online convex optimization (OCO). In the online setting, the release of each distinct decision or iterate carries with it the potential for privacy loss. To limit such privacy leakage, we design an optimization-based OCO algorithm that explicitly limits the number of switches via objective perturbation and rejection sampling. This improves over known results in multiple aspects: it achieves an optimal leading-order regret term, it is efficiently implementable without requiring log-concave sampling subroutines, and it matches the non-private regret bound in sub-constant regimes of the privacy parameters. Leveraging the fact that the algorithm explicitly minimizes the number of decision switches, we show that it also obtains optimal regret bounds in the lazy OCO setting, where the learner is constrained to perform a limited number of switches. In addition, for one- and two-dimensional decision sets, we present a novel approach for differentially private online Lipschitz learning, where the loss functions are Lipschitz but not necessarily convex, that achieves the optimal regret bound matching known lower bounds.
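To make the switching-limited idea concrete, here is a purely illustrative sketch, not the paper's construction: at each round a candidate iterate is computed by optimizing a noise-perturbed (objective-perturbed) follow-the-leader objective, and a rejection step keeps the current decision unless the candidate improves the perturbed objective by a margin, so that switches remain rare. All names, the noise scale sigma, the margin tau, and the synthetic linear losses below are assumptions made only for illustration.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the authors' algorithm):
# lazy OCO over the unit ball with objective perturbation and a
# rejection step that limits the number of decision switches.

rng = np.random.default_rng(0)
d = 5        # dimension of the decision set (unit ball, assumed)
T = 200      # number of rounds
eta = 0.1    # regularization / step-size parameter (assumed)
sigma = 0.05 # objective-perturbation noise scale (assumed)
tau = 0.01   # improvement margin required to switch (assumed)


def project(x):
    """Project onto the unit Euclidean ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n


x = np.zeros(d)         # current (lazy) decision
grad_sum = np.zeros(d)  # accumulated loss gradients
switches = 0

for t in range(T):
    # Synthetic linear loss gradient for this round (for illustration only).
    g = rng.normal(size=d)
    grad_sum += g

    # Objective perturbation: add a random linear term to the regularized
    # follow-the-leader objective before minimizing it over the ball.
    noise = rng.normal(scale=sigma, size=d)
    candidate = project(-eta * (grad_sum + noise))

    def perturbed_obj(z):
        return (grad_sum + noise) @ z + np.dot(z, z) / (2 * eta)

    # Rejection step: switch only if the candidate is markedly better on the
    # perturbed objective, keeping the total number of switches small.
    if perturbed_obj(candidate) < perturbed_obj(x) - tau:
        x = candidate
        switches += 1

print(f"rounds={T}, switches={switches}")
```

In this toy version the margin tau directly trades off regret against the number of switches; the paper's analysis makes that trade-off precise and couples it with the privacy guarantee, which the sketch above does not attempt to model.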