Mitigating Bias in Adaptive Data Gathering via Differential Privacy

Seth Neel, Aaron Roth
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3720-3729, 2018.

Abstract

Data that is gathered adaptively, for example via bandit algorithms, exhibits bias. This is true both when gathering simple numeric-valued data (the empirical means maintained by stochastic bandit algorithms are biased downwards) and when gathering more complex data (running hypothesis tests on data gathered via contextual bandit algorithms leads to false discovery). In this paper, we show that this problem is mitigated if the data collection procedure is differentially private. This lets us both bound the bias of simple numeric-valued quantities (like the empirical means of stochastic bandit algorithms) and correct the p-values of hypothesis tests run on the adaptively gathered data. Moreover, there exist differentially private bandit algorithms with near-optimal regret bounds: we apply existing theorems in the simple stochastic case, and give a new analysis for linear contextual bandits. We complement our theoretical results with experiments validating our theory.
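To make the phenomenon in the abstract concrete, below is a minimal simulation, not taken from the paper: a greedy stochastic bandit on identical arms produces downward-biased empirical means, while selecting arms via Laplace-perturbed means (a report-noisy-max-style step, used here as an illustrative stand-in for the paper's private algorithms) shrinks that bias. The function name, the noise scale, and all parameters are illustrative choices.

```python
# Illustrative sketch, assuming nothing beyond NumPy: adaptively collected
# empirical means are biased downwards, and noisy (differentially private
# style) arm selection mitigates the bias. Not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(true_means, T, eps=None):
    """Play T rounds, greedily picking the arm with the highest mean.

    If eps is not None, Laplace(1/eps) noise is added to the empirical means
    before each selection (a report-noisy-max-style step); eps=None is the
    non-private greedy baseline.
    """
    k = len(true_means)
    counts = np.ones(k)                    # one initial pull per arm
    sums = rng.normal(true_means, 1.0)     # initial unit-variance rewards
    for _ in range(T - k):
        means = sums / counts
        if eps is not None:
            means = means + rng.laplace(scale=1.0 / eps, size=k)
        a = int(np.argmax(means))
        counts[a] += 1
        sums[a] += rng.normal(true_means[a], 1.0)
    return sums / counts                   # final empirical means

true_means = np.zeros(3)                   # identical arms: any gap is bias
bias_greedy = np.mean([run_bandit(true_means, 200) - true_means
                       for _ in range(500)])
bias_private = np.mean([run_bandit(true_means, 200, eps=0.5) - true_means
                        for _ in range(500)])
print(f"avg bias, greedy:  {bias_greedy:+.3f}")   # noticeably negative
print(f"avg bias, private: {bias_private:+.3f}")  # much closer to zero
```

Arms that happen to draw low rewards are abandoned by the greedy rule with their estimates frozen at low values, which is the source of the downward bias; the noisy selection keeps revisiting all arms, so the final estimates stay close to unbiased.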

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-neel18a,
  title     = {Mitigating Bias in Adaptive Data Gathering via Differential Privacy},
  author    = {Neel, Seth and Roth, Aaron},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3720--3729},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/neel18a/neel18a.pdf},
  url       = {https://proceedings.mlr.press/v80/neel18a.html}
}
Endnote
%0 Conference Paper
%T Mitigating Bias in Adaptive Data Gathering via Differential Privacy
%A Seth Neel
%A Aaron Roth
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-neel18a
%I PMLR
%P 3720--3729
%U https://proceedings.mlr.press/v80/neel18a.html
%V 80
APA
Neel, S. & Roth, A. (2018). Mitigating Bias in Adaptive Data Gathering via Differential Privacy. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3720-3729. Available from https://proceedings.mlr.press/v80/neel18a.html.
