Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:30998-31014, 2024.

Abstract

Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a sample is in the training set. Existing work uses gradient ascent to enlarge the loss variance of training data, alleviating the privacy risk. However, optimizing in the reverse direction may cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method, Convex-Concave Loss (CCL), which induces a high variance in the training loss distribution through gradient descent. Our method is motivated by the theoretical analysis that convex losses tend to decrease the loss variance during training. The key idea behind CCL is therefore to reduce the convexity of the loss function with a concave term. Trained with CCL, neural networks produce losses with high variance for training data, reinforcing the defense against MIAs. Extensive experiments demonstrate the superiority of CCL, which achieves a state-of-the-art privacy-utility trade-off.
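
As a concrete illustration of the idea above, the loss can be written as a convex cross-entropy term plus a concave term in the predicted probability of the true class. The PyTorch sketch below is a minimal illustration under assumptions, not the paper's exact formulation: the specific concave term (-alpha * p_y^2) and the weight alpha = 0.5 are chosen here only to demonstrate the convex-plus-concave structure.

import torch
import torch.nn.functional as F

def convex_concave_loss(logits, targets, alpha=0.5):
    # Per-sample cross-entropy, ce = -log p_y, which is convex in p_y.
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_y = torch.exp(-ce)            # predicted probability of the true class
    concave = -alpha * p_y.pow(2)   # assumed concave term in p_y; reduces overall convexity
    return (ce + concave).mean()

# Usage: a drop-in replacement for cross-entropy during training.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
convex_concave_loss(logits, targets).backward()

Because the concave term has negative curvature in p_y, it partially offsets the curvature of cross-entropy, which the abstract argues is what drives the loss variance of training data down; the combined loss is still minimized by ordinary gradient descent.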

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-liu24q,
  title = {Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss},
  author = {Liu, Zhenlong and Feng, Lei and Zhuang, Huiping and Cao, Xiaofeng and Wei, Hongxin},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {30998--31014},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24q/liu24q.pdf},
  url = {https://proceedings.mlr.press/v235/liu24q.html},
  abstract = {Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a sample is in the training set. Existing work uses gradient ascent to enlarge the loss variance of training data, alleviating the privacy risk. However, optimizing in the reverse direction may cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method, Convex-Concave Loss (CCL), which induces a high variance in the training loss distribution through gradient descent. Our method is motivated by the theoretical analysis that convex losses tend to decrease the loss variance during training. The key idea behind CCL is therefore to reduce the convexity of the loss function with a concave term. Trained with CCL, neural networks produce losses with high variance for training data, reinforcing the defense against MIAs. Extensive experiments demonstrate the superiority of CCL, which achieves a state-of-the-art privacy-utility trade-off.}
}
Endnote
%0 Conference Paper
%T Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss
%A Zhenlong Liu
%A Lei Feng
%A Huiping Zhuang
%A Xiaofeng Cao
%A Hongxin Wei
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-liu24q
%I PMLR
%P 30998--31014
%U https://proceedings.mlr.press/v235/liu24q.html
%V 235
%X Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a sample is in the training set. Existing work uses gradient ascent to enlarge the loss variance of training data, alleviating the privacy risk. However, optimizing in the reverse direction may cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method, Convex-Concave Loss (CCL), which induces a high variance in the training loss distribution through gradient descent. Our method is motivated by the theoretical analysis that convex losses tend to decrease the loss variance during training. The key idea behind CCL is therefore to reduce the convexity of the loss function with a concave term. Trained with CCL, neural networks produce losses with high variance for training data, reinforcing the defense against MIAs. Extensive experiments demonstrate the superiority of CCL, which achieves a state-of-the-art privacy-utility trade-off.
APA
Liu, Z., Feng, L., Zhuang, H., Cao, X. & Wei, H. (2024). Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:30998-31014. Available from https://proceedings.mlr.press/v235/liu24q.html.