Adversarially Learned Representations for Information Obfuscation and Inference

Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Galen Reeves, Guillermo Sapiro
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:614-623, 2019.

Abstract

Data collection and sharing are pervasive aspects of modern society. This process can either be voluntary, as in the case of a person taking a facial image to unlock his/her phone, or incidental, such as traffic cameras collecting videos of pedestrians. An undesirable side effect of these processes is that shared data can carry information about attributes that users might consider sensitive, even when such information is of limited use for the task. It is therefore desirable for both data collectors and users to design procedures that minimize sensitive information leakage. Balancing the competing objectives of providing meaningful individualized service levels and inference while obfuscating sensitive information is still an open problem. In this work, we take an information theoretic approach that is implemented as an unconstrained adversarial game between Deep Neural Networks in a principled, data-driven manner. This approach enables us to learn domain-preserving stochastic transformations that maintain performance on existing algorithms while minimizing sensitive information leakage.
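The adversarial-game idea in the abstract can be illustrated with a minimal sketch. The following is not the paper's actual architecture or objective; it is a toy NumPy example under assumed choices: logistic models stand in for the deep networks, the stochastic transformation is a learned per-coordinate scaling plus Gaussian noise, and the obfuscator alternates between descending the task loss and ascending the adversary's loss. All names (`u`, `s`, `w`, `sigma`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: coordinate 0 carries the utility label, coordinate 1 the
# sensitive attribute; the setup is purely illustrative.
n = 2000
X = rng.normal(size=(n, 2))
u = (X[:, 0] > 0).astype(float)  # utility label (task to preserve)
s = (X[:, 1] > 0).astype(float)  # sensitive attribute (to obfuscate)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w = np.ones(2)                   # obfuscator: per-coordinate scaling
a, b_a = np.zeros(2), 0.0        # adversary: logistic model for s given z
t, b_t = np.zeros(2), 0.0        # task model: logistic model for u given z
lr, sigma = 0.1, 0.5

for _ in range(500):
    # Stochastic transformation: scale each coordinate, then add noise.
    Z = X * w + sigma * rng.normal(size=X.shape)

    # Adversary descends its cross-entropy for predicting s from Z.
    g = sigmoid(Z @ a + b_a) - s
    a -= lr * (Z.T @ g) / n
    b_a -= lr * g.mean()

    # Task model descends its cross-entropy for predicting u from Z.
    g = sigmoid(Z @ t + b_t) - u
    t -= lr * (Z.T @ g) / n
    b_t -= lr * g.mean()

    # Obfuscator descends the task loss while ascending the adversary's
    # loss: one step of the minimax game on the shared transform w.
    g_task = ((sigmoid(Z @ t + b_t) - u)[:, None] * (X * t)).mean(axis=0)
    g_adv = ((sigmoid(Z @ a + b_a) - s)[:, None] * (X * a)).mean(axis=0)
    w -= lr * (g_task - g_adv)

# The learned transform suppresses the sensitive coordinate while
# keeping (or amplifying) the task-relevant one.
print(w)
```

In this toy equilibrium the scaling on the sensitive coordinate shrinks toward zero, driving the adversary to chance-level predictions, while the task coordinate's scale is preserved so downstream inference still works on the released representation.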

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-bertran19a,
  title = {Adversarially Learned Representations for Information Obfuscation and Inference},
  author = {Bertran, Martin and Martinez, Natalia and Papadaki, Afroditi and Qiu, Qiang and Rodrigues, Miguel and Reeves, Galen and Sapiro, Guillermo},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {614--623},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/bertran19a/bertran19a.pdf},
  url = {https://proceedings.mlr.press/v97/bertran19a.html},
  abstract = {Data collection and sharing are pervasive aspects of modern society. This process can either be voluntary, as in the case of a person taking a facial image to unlock his/her phone, or incidental, such as traffic cameras collecting videos on pedestrians. An undesirable side effect of these processes is that shared data can carry information about attributes that users might consider as sensitive, even when such information is of limited use for the task. It is therefore desirable for both data collectors and users to design procedures that minimize sensitive information leakage. Balancing the competing objectives of providing meaningful individualized service levels and inference while obfuscating sensitive information is still an open problem. In this work, we take an information theoretic approach that is implemented as an unconstrained adversarial game between Deep Neural Networks in a principled, data-driven manner. This approach enables us to learn domain-preserving stochastic transformations that maintain performance on existing algorithms while minimizing sensitive information leakage.}
}
Endnote
%0 Conference Paper
%T Adversarially Learned Representations for Information Obfuscation and Inference
%A Martin Bertran
%A Natalia Martinez
%A Afroditi Papadaki
%A Qiang Qiu
%A Miguel Rodrigues
%A Galen Reeves
%A Guillermo Sapiro
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-bertran19a
%I PMLR
%P 614--623
%U https://proceedings.mlr.press/v97/bertran19a.html
%V 97
%X Data collection and sharing are pervasive aspects of modern society. This process can either be voluntary, as in the case of a person taking a facial image to unlock his/her phone, or incidental, such as traffic cameras collecting videos on pedestrians. An undesirable side effect of these processes is that shared data can carry information about attributes that users might consider as sensitive, even when such information is of limited use for the task. It is therefore desirable for both data collectors and users to design procedures that minimize sensitive information leakage. Balancing the competing objectives of providing meaningful individualized service levels and inference while obfuscating sensitive information is still an open problem. In this work, we take an information theoretic approach that is implemented as an unconstrained adversarial game between Deep Neural Networks in a principled, data-driven manner. This approach enables us to learn domain-preserving stochastic transformations that maintain performance on existing algorithms while minimizing sensitive information leakage.
APA
Bertran, M., Martinez, N., Papadaki, A., Qiu, Q., Rodrigues, M., Reeves, G. & Sapiro, G. (2019). Adversarially Learned Representations for Information Obfuscation and Inference. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:614-623. Available from https://proceedings.mlr.press/v97/bertran19a.html.

Related Material