Encrypted Linear Contextual Bandit

Evrard Garcelon, Matteo Pirotta, Vianney Perchet
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2519-2551, 2022.

Abstract

Contextual bandits provide a general framework for online learning in sequential decision-making problems and have found application in a wide range of domains, including recommendation systems, online advertising, and clinical trials. A critical aspect of bandit methods is that they require observing the contexts, i.e., individual or group-level data, and the rewards in order to solve the sequential problem. Their widespread deployment in industrial applications has increased interest in methods that preserve the users' privacy. In this paper, we introduce a privacy-preserving bandit framework based on homomorphic encryption, which allows computations to be carried out directly on encrypted data. The algorithm only observes encrypted information (contexts and rewards) and has no ability to decrypt it. Leveraging the properties of homomorphic encryption, we show that, despite the complexity of the setting, it is possible to solve any linear contextual bandit problem over encrypted data with a $\widetilde{O}(d\sqrt{T})$ regret bound, while keeping the data encrypted.
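The $\widetilde{O}(d\sqrt{T})$ rate matches standard linear contextual bandit (LinUCB/OFUL-style) algorithms, whose core statistics are a regularized design matrix and a reward-weighted context sum. The plaintext NumPy sketch below is an illustration only of those updates and of the optimistic action selection; in the paper's setting these same quantities would instead be maintained over ciphertexts via homomorphic encryption. The environment, constants, and variable names here are assumptions for illustration, not the authors' protocol.

```python
import numpy as np

d, n_arms, T = 5, 10, 2000                 # illustrative dimensions and horizon
rng = np.random.default_rng(0)
theta_star = rng.normal(size=d)            # hidden linear parameter (toy environment)
theta_star /= np.linalg.norm(theta_star)

lam, beta = 1.0, 1.0                       # ridge regularizer and confidence width (assumed values)
V = lam * np.eye(d)                        # regularized design matrix V_t = lam*I + sum_s x_s x_s^T
b = np.zeros(d)                            # reward-weighted context sum  sum_s r_s x_s
cum_regret = 0.0

for t in range(T):
    contexts = rng.normal(size=(n_arms, d))
    contexts /= np.linalg.norm(contexts, axis=1, keepdims=True)

    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b                  # ridge estimate of the unknown parameter
    # Optimistic (UCB) score: estimated reward + exploration bonus ||x||_{V^{-1}}.
    bonus = np.sqrt(np.einsum("ad,dc,ac->a", contexts, V_inv, contexts))
    a = int(np.argmax(contexts @ theta_hat + beta * bonus))

    x = contexts[a]
    r = x @ theta_star + 0.1 * rng.normal()   # noisy linear reward

    # These rank-one updates are the statistics that an encrypted protocol
    # would have to compute over ciphertexts instead of plaintext arrays.
    V += np.outer(x, x)
    b += r * x
    cum_regret += np.max(contexts @ theta_star) - x @ theta_star

print(f"cumulative regret after {T} rounds: {cum_regret:.1f}")
```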

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-garcelon22a,
  title     = {Encrypted Linear Contextual Bandit},
  author    = {Garcelon, Evrard and Pirotta, Matteo and Perchet, Vianney},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {2519--2551},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/garcelon22a/garcelon22a.pdf},
  url       = {https://proceedings.mlr.press/v151/garcelon22a.html},
  abstract  = {Contextual bandit is a general framework for online learning in sequential decision-making problems that has found application in a wide range of domains, including recommendation systems, online advertising, and clinical trials. A critical aspect of bandit methods is that they require to observe the contexts –i.e., individual or group-level data– and rewards in order to solve the sequential problem. The large deployment in industrial applications has increased interest in methods that preserve the users’ privacy. In this paper, we introduce a privacy-preserving bandit framework based on homomorphic encryption which allows computations using encrypted data. The algorithm only observes encrypted information (contexts and rewards) and has no ability to decrypt it. Leveraging the properties of homomorphic encryption, we show that despite the complexity of the setting, it is possible to solve linear contextual bandits over encrypted data with a $\widetilde{O}(d\sqrt{T})$ regret bound in any linear contextual bandit problem, while keeping data encrypted.}
}
Endnote
%0 Conference Paper
%T Encrypted Linear Contextual Bandit
%A Evrard Garcelon
%A Matteo Pirotta
%A Vianney Perchet
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-garcelon22a
%I PMLR
%P 2519--2551
%U https://proceedings.mlr.press/v151/garcelon22a.html
%V 151
%X Contextual bandit is a general framework for online learning in sequential decision-making problems that has found application in a wide range of domains, including recommendation systems, online advertising, and clinical trials. A critical aspect of bandit methods is that they require to observe the contexts –i.e., individual or group-level data– and rewards in order to solve the sequential problem. The large deployment in industrial applications has increased interest in methods that preserve the users’ privacy. In this paper, we introduce a privacy-preserving bandit framework based on homomorphic encryption which allows computations using encrypted data. The algorithm only observes encrypted information (contexts and rewards) and has no ability to decrypt it. Leveraging the properties of homomorphic encryption, we show that despite the complexity of the setting, it is possible to solve linear contextual bandits over encrypted data with a $\widetilde{O}(d\sqrt{T})$ regret bound in any linear contextual bandit problem, while keeping data encrypted.
APA
Garcelon, E., Pirotta, M. & Perchet, V. (2022). Encrypted Linear Contextual Bandit. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:2519-2551. Available from https://proceedings.mlr.press/v151/garcelon22a.html.
