Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements

Naman Agarwal, Satyen Kale, Karan Singh, Abhradeep Guha Thakurta
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:343-361, 2024.

Abstract

We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. Unlike recent results, our algorithms and analyses do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Additionally, even for smooth losses, the resulting regret guarantees improve upon previous results in terms of their dependence on dimension. Our results provide the best known rates for DP-OCO in all practical regimes of the privacy parameter, barring when it is exceptionally small. The principal innovation in our algorithm design is the use of sampling from strongly log-concave densities which satisfy the Log-Sobolev Inequality. The resulting concentration of measure allows us to obtain a better trade-off for the dimension factors than prior work, leading to improved results. Following previous works on DP-OCO, the proposed algorithm explicitly limits the number of switches via rejection sampling. Thus, independently of privacy constraints, the algorithm also provides improved results for online convex optimization with a switching budget.
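To make the mechanism sketched in the abstract concrete, the following is a minimal, purely illustrative Python sketch, not the paper's algorithm: it maintains a strongly log-concave density over iterates, approximately samples from it with a simple Langevin loop, and applies a rejection-style acceptance check so the played point switches only occasionally. The quadratic regularizer, step size, Langevin sampler, and switching rule are all assumptions made for illustration; the paper's actual sampler, privacy calibration, and regret analysis differ.

import numpy as np

rng = np.random.default_rng(0)
d, T, eta, lam = 2, 200, 0.1, 1.0      # dimension, rounds, step size, regularization strength

def neg_log_density(x, grad_sum):
    # Unnormalized negative log-density of q_t(x) proportional to
    # exp(-eta*<G_t, x> - (lam/2)*||x||^2), which is lam-strongly log-concave.
    return eta * grad_sum @ x + 0.5 * lam * x @ x

def sample_q(grad_sum, n_steps=300, step=1e-2):
    # Unadjusted Langevin dynamics as an illustrative stand-in for an exact
    # sampler from the strongly log-concave density q_t.
    x = np.zeros(d)
    for _ in range(n_steps):
        grad = eta * grad_sum + lam * x
        x += -step * grad + np.sqrt(2 * step) * rng.standard_normal(d)
    return x

grad_sum = np.zeros(d)
x_t, switches = sample_q(grad_sum), 0
for t in range(1, T + 1):
    g_t = rng.standard_normal(d)        # gradient of the (here random) linear loss at round t
    new_sum = grad_sum + g_t
    # Rejection-style check: keep the previous iterate if it remains likely under
    # the updated density; otherwise resample, which counts as a switch.
    accept_prob = min(1.0, np.exp(neg_log_density(x_t, grad_sum)
                                  - neg_log_density(x_t, new_sum)))
    if rng.random() > accept_prob:
        x_t, switches = sample_q(new_sum), switches + 1
    grad_sum = new_sum
print("switches over", T, "rounds:", switches)

Because the acceptance ratio compares consecutive densities at the current iterate, the point is retained on most rounds, which is the sense in which such a scheme trades regret for a small switching budget.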

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-agarwal24d,
  title     = {Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements},
  author    = {Agarwal, Naman and Kale, Satyen and Singh, Karan and Guha Thakurta, Abhradeep},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {343--361},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/agarwal24d/agarwal24d.pdf},
  url       = {https://proceedings.mlr.press/v235/agarwal24d.html},
  abstract  = {We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. Unlike recent results, our algorithms and analyses do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Additionally, even for smooth losses, the resulting regret guarantees improve upon previous results in terms of their dependence on dimension. Our results provide the best known rates for DP-OCO in all practical regimes of the privacy parameter, barring when it is exceptionally small. The principal innovation in our algorithm design is the use of sampling from strongly log-concave densities which satisfy the Log-Sobolev Inequality. The resulting concentration of measure allows us to obtain a better trade-off for the dimension factors than prior work, leading to improved results. Following previous works on DP-OCO, the proposed algorithm explicitly limits the number of switches via rejection sampling. Thus, independently of privacy constraints, the algorithm also provides improved results for online convex optimization with a switching budget.}
}
Endnote
%0 Conference Paper
%T Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements
%A Naman Agarwal
%A Satyen Kale
%A Karan Singh
%A Abhradeep Guha Thakurta
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-agarwal24d
%I PMLR
%P 343--361
%U https://proceedings.mlr.press/v235/agarwal24d.html
%V 235
%X We design differentially private regret-minimizing algorithms in the online convex optimization (OCO) framework. Unlike recent results, our algorithms and analyses do not require smoothness, thus yielding the first private regret bounds with an optimal leading-order term for non-smooth loss functions. Additionally, even for smooth losses, the resulting regret guarantees improve upon previous results in terms of their dependence on dimension. Our results provide the best known rates for DP-OCO in all practical regimes of the privacy parameter, barring when it is exceptionally small. The principal innovation in our algorithm design is the use of sampling from strongly log-concave densities which satisfy the Log-Sobolev Inequality. The resulting concentration of measure allows us to obtain a better trade-off for the dimension factors than prior work, leading to improved results. Following previous works on DP-OCO, the proposed algorithm explicitly limits the number of switches via rejection sampling. Thus, independently of privacy constraints, the algorithm also provides improved results for online convex optimization with a switching budget.
APA
Agarwal, N., Kale, S., Singh, K. & Guha Thakurta, A. (2024). Improved Differentially Private and Lazy Online Convex Optimization: Lower Regret without Smoothness Requirements. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:343-361. Available from https://proceedings.mlr.press/v235/agarwal24d.html.
