On the Privacy Risks of Algorithmic Recourse

Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:9680-9696, 2023.

Abstract

As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit these recourses to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model’s training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
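The abstract describes attacks that threshold on the distance between an instance and its counterfactual. As a toy illustration of that idea (not the authors' actual algorithm): for a linear classifier, the minimal L2 counterfactual distance of a point is simply its distance to the decision hyperplane, |w·x + b| / ||w||, so a distance-based membership guess can be sketched in a few lines. The dataset, threshold rule, and direction of the test below are illustrative assumptions.

```python
# Hedged sketch of a distance-based membership inference attack using
# counterfactual (recourse) distances. For a linear model the minimal L2
# counterfactual distance is the distance to the decision boundary.
# This is a simplified illustration, not the paper's exact attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two Gaussian blobs (assumption for illustration).
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

# Split into members (used for training) and non-members (held out).
idx = rng.permutation(2 * n)
members, nonmembers = idx[:n], idx[n:]
model = LogisticRegression().fit(X[members], y[members])

def counterfactual_distance(model, X):
    """Minimal L2 recourse cost for a linear classifier:
    distance from each point to the decision hyperplane."""
    w = model.coef_.ravel()
    b = model.intercept_[0]
    return np.abs(X @ w + b) / np.linalg.norm(w)

d_all = counterfactual_distance(model, X)

# Simple threshold attack: guess "member" when the recourse distance exceeds
# a threshold (the direction and the median threshold are assumptions here;
# the paper calibrates its attacks empirically).
tau = np.median(d_all)
guesses = d_all > tau
balanced_acc = (guesses[members].mean() + (~guesses[nonmembers]).mean()) / 2
print(f"attack balanced accuracy: {balanced_acc:.2f}")
```

On this well-separated synthetic data the model barely overfits, so the attack hovers near chance; the paper's point is that on real models and data the distance distributions of members and non-members can differ enough to leak membership.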

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-pawelczyk23a,
  title     = {On the Privacy Risks of Algorithmic Recourse},
  author    = {Pawelczyk, Martin and Lakkaraju, Himabindu and Neel, Seth},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {9680--9696},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/pawelczyk23a/pawelczyk23a.pdf},
  url       = {https://proceedings.mlr.press/v206/pawelczyk23a.html},
  abstract  = {As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit these recourses to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model’s training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.}
}
Endnote
%0 Conference Paper
%T On the Privacy Risks of Algorithmic Recourse
%A Martin Pawelczyk
%A Himabindu Lakkaraju
%A Seth Neel
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-pawelczyk23a
%I PMLR
%P 9680--9696
%U https://proceedings.mlr.press/v206/pawelczyk23a.html
%V 206
%X As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit these recourses to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model’s training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
APA
Pawelczyk, M., Lakkaraju, H. & Neel, S. (2023). On the Privacy Risks of Algorithmic Recourse. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:9680-9696. Available from https://proceedings.mlr.press/v206/pawelczyk23a.html.