No-Regret Algorithms for Private Gaussian Process Bandit Optimization

Abhimanyu Dubey
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2062-2070, 2021.

Abstract

The widespread proliferation of data-driven decision-making has ushered in recent interest in the design of privacy-preserving algorithms. In this paper, we consider the ubiquitous problem of Gaussian process (GP) bandit optimization through the lens of privacy-preserving statistics. We propose a solution for differentially private GP bandit optimization that combines uniform kernel approximation with random perturbations, providing a generic framework for creating differentially private (DP) Gaussian process bandit algorithms. For two specific DP settings, joint and local differential privacy, we provide algorithms based on efficient quadrature Fourier feature approximators that are computationally efficient and provably no-regret for a class of stationary kernel functions. In contrast to previous work, our algorithms maintain differential privacy throughout the optimization procedure and, critically, do not rely on the sample path for prediction, making them scalable and straightforward to release as well.
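The recipe the abstract describes, replacing the GP with a finite Fourier feature approximation and perturbing the released statistics with random noise, can be illustrated with a short sketch. The Python snippet below is a minimal illustration under assumptions and is not the paper's algorithm: it uses random (rather than quadrature) Fourier features for an RBF kernel, runs a UCB-style rule on the feature-space posterior, and adds Gaussian noise with a placeholder scale sigma_dp to the sufficient statistics instead of a calibrated (epsilon, delta)-DP mechanism.

# Illustrative sketch only (not the paper's exact algorithm): GP-UCB on a
# Fourier feature approximation of an RBF kernel, with Gaussian noise added
# to the released sufficient statistics as a stand-in for a differentially
# private perturbation mechanism. Noise scales are placeholders, not
# calibrated privacy guarantees.
import numpy as np

rng = np.random.default_rng(0)

# --- problem setup: 1-D black-box objective on a finite candidate grid ---
def f(x):
    return np.sin(3 * x) + 0.5 * np.cos(5 * x)

candidates = np.linspace(0.0, 2.0, 200)
noise_std = 0.1          # observation noise
T = 100                  # optimization horizon

# --- Fourier feature map approximating an RBF kernel (lengthscale ell) ---
m, ell = 64, 0.3
omegas = rng.normal(0.0, 1.0 / ell, size=m)     # sampled spectral frequencies
phases = rng.uniform(0.0, 2 * np.pi, size=m)

def phi(x):
    # phi(x)^T phi(y) approximates k(x, y) for the RBF kernel
    return np.sqrt(2.0 / m) * np.cos(np.outer(np.atleast_1d(x), omegas) + phases)

Phi_cand = phi(candidates)                       # (200, m) feature matrix

# --- sufficient statistics: V = lam*I + sum phi phi^T, u = sum y * phi ---
lam = 1.0                                        # ridge regularizer
V = lam * np.eye(m)
u = np.zeros(m)
sigma_dp = 0.05                                  # placeholder DP noise scale
beta = 2.0                                       # UCB exploration weight

best_y = -np.inf
for t in range(T):
    # symmetric Gaussian perturbation of the Gram matrix, plus a diagonal
    # shift to keep it positive definite (illustrative only)
    N = rng.normal(0.0, sigma_dp, size=(m, m))
    V_priv = V + (N + N.T) / 2 + 2 * sigma_dp * np.sqrt(m) * np.eye(m)
    u_priv = u + rng.normal(0.0, sigma_dp, size=m)

    # posterior mean / width in feature space, then UCB acquisition
    V_inv = np.linalg.inv(V_priv)
    mean = Phi_cand @ (V_inv @ u_priv)
    width = np.sqrt(np.einsum('ij,jk,ik->i', Phi_cand, V_inv, Phi_cand))
    x_t = candidates[np.argmax(mean + beta * width)]

    # query the objective and update the running (pre-noise) statistics
    y_t = f(x_t) + rng.normal(0.0, noise_std)
    v_t = phi(x_t).ravel()
    V += np.outer(v_t, v_t)
    u += y_t * v_t
    best_y = max(best_y, y_t)

print(f"best observed value after {T} rounds: {best_y:.3f}")

Note that prediction here depends only on the perturbed finite-dimensional statistics (V_priv, u_priv), not on the raw sample path, which is the property the abstract highlights for releasability; the specific noise calibration for joint versus local DP is given in the paper itself.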

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-dubey21a,
  title     = {No-Regret Algorithms for Private Gaussian Process Bandit Optimization},
  author    = {Dubey, Abhimanyu},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2062--2070},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/dubey21a/dubey21a.pdf},
  url       = {https://proceedings.mlr.press/v130/dubey21a.html}
}
Endnote
%0 Conference Paper
%T No-Regret Algorithms for Private Gaussian Process Bandit Optimization
%A Abhimanyu Dubey
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-dubey21a
%I PMLR
%P 2062--2070
%U https://proceedings.mlr.press/v130/dubey21a.html
%V 130
APA
Dubey, A. (2021). No-Regret Algorithms for Private Gaussian Process Bandit Optimization. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2062-2070. Available from https://proceedings.mlr.press/v130/dubey21a.html.