Noisy adversarial representation learning for effective and efficient image obfuscation
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:953-962, 2023.
Abstract
Recent real-world applications of deep learning have led to the development of machine learning as a service (MLaaS). However, client-server inference raises privacy concerns, because the server processes raw data sent from the user's client device. One solution is to provide an obfuscator function to the client device using Adversarial Representation Learning (ARL). Prior works have focused primarily on the privacy-utility trade-off while overlooking the computational cost and memory burden on the client side. In this paper, we propose an effective and efficient ARL method that incorporates feature noise into the ARL pipeline. We evaluate our approach on various datasets and compare it with state-of-the-art ARL techniques. Our experimental results indicate that our method achieves better accuracy, lower computation and memory overheads, and improved resistance to information leakage and reconstruction attacks.
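To make the setup described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation): a client-side obfuscator that injects Gaussian noise into intermediate features, a server-side task head, and an adversarial reconstructor that tries to invert the features. All layer sizes, the noise scale sigma, and the loss weight lam are hypothetical choices made for this example.

```python
# Illustrative sketch only; module sizes, sigma, and lam are assumptions.
import torch
import torch.nn as nn

class Obfuscator(nn.Module):
    """Runs on the client: maps raw input to a noisy feature vector."""
    def __init__(self, in_dim=784, feat_dim=64, sigma=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        self.sigma = sigma

    def forward(self, x):
        z = self.encoder(x)
        if self.training:
            z = z + self.sigma * torch.randn_like(z)  # feature noise
        return z

task_head = nn.Sequential(nn.Linear(64, 10))            # server: utility task
reconstructor = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                              nn.Linear(256, 784))      # adversary: invert z

obf = Obfuscator()
opt_main = torch.optim.Adam(list(obf.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)
ce, mse, lam = nn.CrossEntropyLoss(), nn.MSELoss(), 0.5

def train_step(x, y):
    # 1) Adversary: learn to reconstruct x from the (detached) noisy features.
    z = obf(x).detach()
    opt_adv.zero_grad()
    mse(reconstructor(z), x).backward()
    opt_adv.step()

    # 2) Obfuscator + task head: keep task accuracy high while making
    #    reconstruction from the shared features hard.
    opt_main.zero_grad()
    z = obf(x)
    loss = ce(task_head(z), y) - lam * mse(reconstructor(z), x)
    loss.backward()
    opt_main.step()
    return loss.item()

# Smoke test on random data (batch of 8 flattened 28x28 "images").
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
print(train_step(x, y))
```

In this sketch, only the noisy features leave the client; the adversarial reconstruction term stands in for the information-leakage and reconstruction attacks evaluated in the paper, and the added noise is the feature-noise component the abstract refers to.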