On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective

Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1379-1398, 2024.

Abstract

In this paper, we provide lower bounds for Differentially Private (DP) online learning algorithms. Our result shows that, for a broad class of $(\epsilon,\delta)$-DP online algorithms and any number of rounds $T$ such that $\log T\leq O\left(1 / \delta\right)$, the expected number of mistakes incurred by the algorithm grows as \(\Omega\left(\log T\right)\). This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of \(T\). To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed in Sanyal and Ramponi (2022).
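
To make the shape of the result concrete, the following is a schematic restatement of the bound in LaTeX. The mistake count \(M_T\) and the constants \(c_1, c_2\) are illustrative notation introduced here, not taken verbatim from the paper.

% Schematic form of the lower bound (illustrative notation):
% M_T denotes the number of mistakes the learner makes over T rounds,
% and c_1, c_2 > 0 are constants that may depend on the privacy parameter epsilon.
\[
  \log T \;\leq\; \frac{c_1}{\delta}
  \quad\Longrightarrow\quad
  \mathbb{E}\!\left[M_T\right] \;\geq\; c_2 \,\log T .
\]

For comparison, a non-private online learner for a class of finite Littlestone dimension \(d\) makes at most \(d\) mistakes in the realizable setting, independently of \(T\), which is the sense in which the private and non-private settings diverge.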

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-dmitriev24a,
  title     = {On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective},
  author    = {Dmitriev, Daniil and Szab{\'o}, Krist{\'o}f and Sanyal, Amartya},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {1379--1398},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/dmitriev24a/dmitriev24a.pdf},
  url       = {https://proceedings.mlr.press/v247/dmitriev24a.html},
  abstract  = {In this paper, we provide lower bounds for Differentially Private (DP) online learning algorithms. Our result shows that, for a broad class of $(\epsilon,\delta)$-DP online algorithms and any number of rounds $T$ such that $\log T\leq O\left(1 / \delta\right)$, the expected number of mistakes incurred by the algorithm grows as \(\Omega\left(\log T\right)\). This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of \(T\). To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed in Sanyal and Ramponi (2022).}
}
Endnote
%0 Conference Paper
%T On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective
%A Daniil Dmitriev
%A Kristóf Szabó
%A Amartya Sanyal
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-dmitriev24a
%I PMLR
%P 1379--1398
%U https://proceedings.mlr.press/v247/dmitriev24a.html
%V 247
%X In this paper, we provide lower bounds for Differentially Private (DP) online learning algorithms. Our result shows that, for a broad class of $(\epsilon,\delta)$-DP online algorithms and any number of rounds $T$ such that $\log T\leq O\left(1 / \delta\right)$, the expected number of mistakes incurred by the algorithm grows as \(\Omega\left(\log T\right)\). This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of \(T\). To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed in Sanyal and Ramponi (2022).
APA
Dmitriev, D., Szabó, K. & Sanyal, A. (2024). On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:1379-1398. Available from https://proceedings.mlr.press/v247/dmitriev24a.html.