Private Online Prediction from Experts: Separations and Faster Rates

Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:674-699, 2023.

Abstract

Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $\widetilde{O}(\sqrt{T \log d} + \log d/\varepsilon)$ for the stochastic setting and $\widetilde{O}(\sqrt{T \log d} + T^{1/3} \log d/\varepsilon)$ for oblivious adversaries (where $d$ is the number of experts). For pure DP, our algorithms are the first to obtain sub-linear regret for oblivious adversaries in the high-dimensional regime $d \ge T$. Moreover, we prove new lower bounds for adaptive adversaries. Our results imply that unlike the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.
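To see why these bounds mean sub-linear regret, one can evaluate them numerically. The sketch below (illustrative only; it drops the constants and poly-logarithmic factors hidden by the $\widetilde{O}$ notation, and the function names are ours, not the paper's) computes the two approximate-DP bounds from the abstract and checks that average regret per round shrinks as the horizon $T$ grows:

```python
import math

def approx_dp_stochastic_bound(T, d, eps):
    # O~( sqrt(T log d) + log d / eps ), constants/log factors dropped
    return math.sqrt(T * math.log(d)) + math.log(d) / eps

def approx_dp_oblivious_bound(T, d, eps):
    # O~( sqrt(T log d) + T^(1/3) log d / eps ), constants/log factors dropped
    return math.sqrt(T * math.log(d)) + T ** (1 / 3) * math.log(d) / eps

# Average regret per round decreases with T, i.e. the bounds are sub-linear.
for T in (10**4, 10**6):
    b = approx_dp_oblivious_bound(T, d=1000, eps=1.0)
    print(f"T={T:>8}: bound={b:10.1f}  per-round={b / T:.5f}")
```

Note how for $\varepsilon = \Theta(1)$ the privacy term $T^{1/3}\log d/\varepsilon$ is dominated by the non-private $\sqrt{T\log d}$ term, so privacy comes essentially for free at large $T$ in the oblivious setting.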

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-asi23a,
  title     = {Private Online Prediction from Experts: Separations and Faster Rates},
  author    = {Asi, Hilal and Feldman, Vitaly and Koren, Tomer and Talwar, Kunal},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {674--699},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/asi23a/asi23a.pdf},
  url       = {https://proceedings.mlr.press/v195/asi23a.html},
  abstract  = {Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $\widetilde{O}(\sqrt{T \log d} + \log d/\varepsilon)$ for the stochastic setting and $\widetilde{O}(\sqrt{T \log d} + T^{1/3} \log d/\varepsilon)$ for oblivious adversaries (where $d$ is the number of experts). For pure DP, our algorithms are the first to obtain sub-linear regret for oblivious adversaries in the high-dimensional regime $d \ge T$. Moreover, we prove new lower bounds for adaptive adversaries. Our results imply that unlike the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.}
}
Endnote
%0 Conference Paper
%T Private Online Prediction from Experts: Separations and Faster Rates
%A Hilal Asi
%A Vitaly Feldman
%A Tomer Koren
%A Kunal Talwar
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-asi23a
%I PMLR
%P 674--699
%U https://proceedings.mlr.press/v195/asi23a.html
%V 195
%X Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints. We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries. For approximate differential privacy, our algorithms achieve regret bounds of $\widetilde{O}(\sqrt{T \log d} + \log d/\varepsilon)$ for the stochastic setting and $\widetilde{O}(\sqrt{T \log d} + T^{1/3} \log d/\varepsilon)$ for oblivious adversaries (where $d$ is the number of experts). For pure DP, our algorithms are the first to obtain sub-linear regret for oblivious adversaries in the high-dimensional regime $d \ge T$. Moreover, we prove new lower bounds for adaptive adversaries. Our results imply that unlike the non-private setting, there is a strong separation between the optimal regret for adaptive and non-adaptive adversaries for this problem. Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.
APA
Asi, H., Feldman, V., Koren, T. & Talwar, K. (2023). Private Online Prediction from Experts: Separations and Faster Rates. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:674-699. Available from https://proceedings.mlr.press/v195/asi23a.html.