Privacy Amplification via Shuffling for Linear Contextual Bandits

Evrard Garcelon, Kamalika Chaudhuri, Vianney Perchet, Matteo Pirotta
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:381-407, 2022.

Abstract

Contextual bandit algorithms are widely used in domains where personalized services are provided by leveraging contextual information, which may itself be sensitive and require protection. Inspired by this scenario, we study the contextual linear bandit problem under differential privacy (DP) constraints. While the literature has focused on either centralized (joint DP, or JDP) or local (LDP) privacy, we consider the shuffle model of privacy and show that it is possible to achieve a privacy/utility trade-off between JDP and LDP. By combining shuffling from the privacy literature with batching from the bandit literature, we present an algorithm with regret bound $\widetilde{\mathcal{O}}(T^{2/3}/\varepsilon^{1/3})$ that guarantees both central (joint) and local privacy. Our result shows that the shuffle model makes it possible to trade off between JDP and LDP while still preserving local privacy.
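The recipe sketched in the abstract (local randomization, shuffling, batched server updates) can be made concrete with a small example. The Python sketch below is only illustrative and is not the paper's algorithm: it assumes a Gaussian local randomizer with a placeholder noise scale sigma, a uniform shuffler, and a plain batched ridge update of the linear parameter, and it omits the actual noise calibration and the bandit's exploration bonuses.

# Illustrative sketch of the shuffle-model pipeline described in the abstract.
# NOT the paper's protocol: the Gaussian local randomizer, the noise scale
# `sigma`, and the plain batched ridge update are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 5          # context dimension
sigma = 0.1    # hypothetical local noise scale (would be calibrated to the privacy budget)


def local_randomizer(x, r):
    """User-side report: noisy sufficient statistics (x x^T, r x)."""
    noise = rng.normal(scale=sigma, size=(d, d))
    A = np.outer(x, x) + (noise + noise.T) / 2   # symmetric noise on the outer product
    b = r * x + rng.normal(scale=sigma, size=d)
    return A, b


def shuffler(reports):
    """Uniformly permute the batch so the server cannot link reports to users."""
    order = rng.permutation(len(reports))
    return [reports[i] for i in order]


def server_update(V, u, shuffled_reports):
    """Aggregate a shuffled batch into the design matrix / target and re-estimate theta."""
    for A, b in shuffled_reports:
        V = V + A
        u = u + b
    theta_hat = np.linalg.solve(V, u)            # ridge estimate of the unknown parameter
    return V, u, theta_hat


# One batch: users observe contexts, receive rewards, and send privatized reports.
V, u = np.eye(d), np.zeros(d)                    # regularizer lambda = 1
theta_star = rng.normal(size=d)                  # unknown parameter (for simulation only)
batch = []
for _ in range(200):
    x = rng.normal(size=d)
    r = float(x @ theta_star) + rng.normal()
    batch.append(local_randomizer(x, r))

V, u, theta_hat = server_update(V, u, shuffler(batch))
print("estimation error:", np.linalg.norm(theta_hat - theta_star))

One reason batching and shuffling are combined: amplification-by-shuffling guarantees generally strengthen as the number of reports passed through the shuffler grows, which is roughly how the batched algorithm can sit between the JDP and LDP regimes in the stated regret bound.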

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-garcelon22a,
  title     = {Privacy Amplification via Shuffling for Linear Contextual Bandits},
  author    = {Garcelon, Evrard and Chaudhuri, Kamalika and Perchet, Vianney and Pirotta, Matteo},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages     = {381--407},
  year      = {2022},
  editor    = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume    = {167},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v167/garcelon22a/garcelon22a.pdf},
  url       = {https://proceedings.mlr.press/v167/garcelon22a.html},
  abstract  = {Contextual bandit algorithms are widely used in domains where it is desirable to provide a personalized service by leveraging contextual information, that may contain sensitive information that needs to be protected. Inspired by this scenario, we study the contextual linear bandit problem with differential privacy (DP) constraints. While the literature has focused on either centralized (joint DP) or local (local DP) privacy, we consider the shuffle model of privacy and we show that it is possible to achieve a privacy/utility trade-off between JDP and LDP. By leveraging shuffling from privacy and batching from bandits, we present an algorithm with regret bound $\widetilde{\mathcal{O}}(T^{2/3}/\varepsilon^{1/3})$, while guaranteeing both central (joint) and local privacy. Our result shows that it is possible to obtain a trade-off between JDP and LDP by leveraging the shuffle model while preserving local privacy.}
}
Endnote
%0 Conference Paper
%T Privacy Amplification via Shuffling for Linear Contextual Bandits
%A Evrard Garcelon
%A Kamalika Chaudhuri
%A Vianney Perchet
%A Matteo Pirotta
%B Proceedings of The 33rd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Sanjoy Dasgupta
%E Nika Haghtalab
%F pmlr-v167-garcelon22a
%I PMLR
%P 381--407
%U https://proceedings.mlr.press/v167/garcelon22a.html
%V 167
%X Contextual bandit algorithms are widely used in domains where it is desirable to provide a personalized service by leveraging contextual information, that may contain sensitive information that needs to be protected. Inspired by this scenario, we study the contextual linear bandit problem with differential privacy (DP) constraints. While the literature has focused on either centralized (joint DP) or local (local DP) privacy, we consider the shuffle model of privacy and we show that it is possible to achieve a privacy/utility trade-off between JDP and LDP. By leveraging shuffling from privacy and batching from bandits, we present an algorithm with regret bound $\widetilde{\mathcal{O}}(T^{2/3}/\varepsilon^{1/3})$, while guaranteeing both central (joint) and local privacy. Our result shows that it is possible to obtain a trade-off between JDP and LDP by leveraging the shuffle model while preserving local privacy.
APA
Garcelon, E., Chaudhuri, K., Perchet, V. & Pirotta, M. (2022). Privacy Amplification via Shuffling for Linear Contextual Bandits. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:381-407. Available from https://proceedings.mlr.press/v167/garcelon22a.html.
