Differentially Private and Lazy Online Convex Optimization

Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Thakurta
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:4599-4632, 2023.

Abstract

We study the task of differentially private online convex optimization (OCO). In the online setting, the release of each distinct decision or iterate carries with it the potential for privacy loss. To limit such privacy leakage, we design an optimization-based OCO algorithm that explicitly limits the number of switches via objective perturbation and rejection sampling. This improves over known results in multiple aspects: in achieving an optimal leading-order regret term, in being efficiently implementable without requiring log-concave sampling subroutines, and in matching the non-private regret bound for sub-constant regimes of privacy parameters. Leveraging the fact that the algorithm is designed to explicitly minimize the number of switches of decisions, we show that the algorithm also obtains optimal regret bounds in the lazy OCO setting, where the learner is constrained to perform a limited number of switches. In addition, for one- and two-dimensional decision sets, we present a novel approach for differentially private online Lipschitz learning, where the loss functions are Lipschitz but not necessarily convex, that achieves the optimal regret bound matching known lower bounds.
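To give a concrete feel for the two ingredients the abstract names (objective perturbation and switch-limiting acceptance), here is a minimal, hypothetical sketch. It is not the authors' algorithm: it runs follow-the-regularized-leader over linear losses on a Euclidean ball, adds a random linear perturbation to the cumulative objective each round, and only switches the played point when the fresh candidate has moved beyond a threshold. All constants, function names, and the acceptance rule are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): regularized leader on a ball,
# with (i) a random linear "objective perturbation" added to the cumulative loss
# and (ii) a simple distance-based acceptance test so the played decision is only
# switched when the candidate has moved far enough. All constants are hypothetical.

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def lazy_perturbed_oco_sketch(grads, eta=0.1, noise_scale=0.5,
                              switch_threshold=0.2, seed=0):
    """Run the sketch on a fixed sequence of linear-loss gradients.

    grads: (T, d) array; row t is the gradient of the t-th linear loss.
    Returns the sequence of played points and the number of switches.
    """
    rng = np.random.default_rng(seed)
    T, d = grads.shape
    cumulative = np.zeros(d)   # running sum of observed loss gradients
    played = np.zeros(d)       # decision currently being played
    plays, switches = [], 0
    for t in range(T):
        cumulative += grads[t]
        noise = rng.normal(scale=noise_scale, size=d)      # objective perturbation
        candidate = project_to_ball(-eta * (cumulative + noise))
        # Keep playing the old point unless the candidate has moved enough.
        if np.linalg.norm(candidate - played) > switch_threshold:
            played = candidate
            switches += 1
        plays.append(played.copy())
    return np.array(plays), switches

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grads = rng.normal(size=(100, 5))
    _, num_switches = lazy_perturbed_oco_sketch(grads)
    print(f"switches over 100 rounds: {num_switches}")
```

The distance-based acceptance test here is only a crude stand-in for the rejection-sampling step described in the abstract; the paper's construction, privacy accounting, and regret analysis are given in the full text linked below.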

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-agarwal23d, title = {Differentially Private and Lazy Online Convex Optimization}, author = {Agarwal, Naman and Kale, Satyen and Singh, Karan and Thakurta, Abhradeep}, booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory}, pages = {4599--4632}, year = {2023}, editor = {Neu, Gergely and Rosasco, Lorenzo}, volume = {195}, series = {Proceedings of Machine Learning Research}, month = {12--15 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v195/agarwal23d/agarwal23d.pdf}, url = {https://proceedings.mlr.press/v195/agarwal23d.html}, abstract = {We study the task of differentially private online convex optimization (OCO). In the online setting, the release of each distinct decision or iterate carries with it the potential for privacy loss. To limit such privacy leakage, we design an optimization-based OCO algorithm that explicitly limits the number of switches via objective perturbation and rejection sampling. This improves over known results in multiple aspects: an optimal leading-order regret term, in being efficiently implementable without requiring log-concave sampling subroutines, and in matching the non-private regret bound for sub-constant regimes of privacy parameters. Leveraging the fact that the algorithm is designed to explicitly minimize the number of switches of decisions, we show that the algorithm also obtains optimal regret bounds in the lazy OCO setting, where the learner is constrained to perform a limited number of switches. In addition, for one- and two-dimensional decision sets, we present a novel approach for differentially private online Lipschitz learning, where the loss functions are Lipschitz but not necessarily convex, that achieves the optimal regret bound matching known lower bounds.} }
Endnote
%0 Conference Paper %T Differentially Private and Lazy Online Convex Optimization %A Naman Agarwal %A Satyen Kale %A Karan Singh %A Abhradeep Thakurta %B Proceedings of Thirty Sixth Conference on Learning Theory %C Proceedings of Machine Learning Research %D 2023 %E Gergely Neu %E Lorenzo Rosasco %F pmlr-v195-agarwal23d %I PMLR %P 4599--4632 %U https://proceedings.mlr.press/v195/agarwal23d.html %V 195 %X We study the task of differentially private online convex optimization (OCO). In the online setting, the release of each distinct decision or iterate carries with it the potential for privacy loss. To limit such privacy leakage, we design an optimization-based OCO algorithm that explicitly limits the number of switches via objective perturbation and rejection sampling. This improves over known results in multiple aspects: an optimal leading-order regret term, in being efficiently implementable without requiring log-concave sampling subroutines, and in matching the non-private regret bound for sub-constant regimes of privacy parameters. Leveraging the fact that the algorithm is designed to explicitly minimize the number of switches of decisions, we show that the algorithm also obtains optimal regret bounds in the lazy OCO setting, where the learner is constrained to perform a limited number of switches. In addition, for one- and two-dimensional decision sets, we present a novel approach for differentially private online Lipschitz learning, where the loss functions are Lipschitz but not necessarily convex, that achieves the optimal regret bound matching known lower bounds.
APA
Agarwal, N., Kale, S., Singh, K. & Thakurta, A. (2023). Differentially Private and Lazy Online Convex Optimization. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:4599-4632. Available from https://proceedings.mlr.press/v195/agarwal23d.html.
