Exposing Privacy Gaps: Membership Inference Attack on Preference Data for LLM Alignment

Qizhang Feng, Siva Rajesh Kasa, Santhosh Kumar Kasa, Hyokun Yun, Choon Hui Teo, Sravan Babu Bodapati
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:5221-5229, 2025.

Abstract

Large Language Models (LLMs) have seen widespread adoption due to their remarkable natural language capabilities. However, when deploying them in real-world settings, it is important to align LLMs so that they generate text according to acceptable human standards. Methods such as Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) have enabled significant progress in refining LLMs using human preference data. However, the privacy concerns inherent in utilizing such preference data have yet to be adequately studied. In this paper, we investigate the vulnerability of LLMs aligned using two widely used methods, DPO and PPO, to membership inference attacks (MIAs). Our study makes two main contributions: first, we theoretically motivate why DPO models are more vulnerable to MIA than PPO models; second, we introduce PREMIA (Preference data MIA), a novel reference-based attack framework designed specifically for preference data. Using PREMIA and existing baselines, we empirically show that DPO models have a heightened vulnerability to MIA relative to PPO models.
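To make the reference-based idea concrete, the Python sketch below illustrates the general shape of a reference-based membership score for a preference pair: compare how strongly the aligned (target) model separates the chosen from the rejected response against the same margin under the untuned reference model. This is a minimal illustrative sketch, not the paper's PREMIA implementation; the helper names (target_lp, reference_lp, premia_style_score) and the simple thresholding rule are assumptions introduced here for exposition.

from typing import Callable

# (prompt, response) -> log p(response | prompt) under some causal LM.
# Any scoring function (e.g. summed token log-probabilities) can be plugged in.
LogProbFn = Callable[[str, str], float]

def premia_style_score(prompt: str, chosen: str, rejected: str,
                       target_lp: LogProbFn, reference_lp: LogProbFn) -> float:
    """Reference-based membership signal for one preference pair (illustrative).

    Intuition: a model aligned on this pair (e.g. via DPO) tends to enlarge the
    margin log p(chosen) - log p(rejected) relative to the untuned reference
    model; a larger gap is weak evidence the pair was in the training set.
    """
    target_margin = target_lp(prompt, chosen) - target_lp(prompt, rejected)
    reference_margin = reference_lp(prompt, chosen) - reference_lp(prompt, rejected)
    return target_margin - reference_margin

def predict_member(score: float, threshold: float = 0.0) -> bool:
    # In practice the threshold would be calibrated on known non-member pairs.
    return score > threshold

An attacker would compute this score over candidate preference pairs and sweep the threshold to trade off true- and false-positive rates, which is how MIA success is typically reported (e.g. via AUC).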

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-feng25a,
  title     = {Exposing Privacy Gaps: Membership Inference Attack on Preference Data for LLM Alignment},
  author    = {Feng, Qizhang and Kasa, Siva Rajesh and Kasa, Santhosh Kumar and Yun, Hyokun and Teo, Choon Hui and Bodapati, Sravan Babu},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {5221--5229},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/feng25a/feng25a.pdf},
  url       = {https://proceedings.mlr.press/v258/feng25a.html},
  abstract  = {Large Language Models (LLMs) have seen widespread adoption due to their remarkable natural language capabilities. However, when deploying them in real-world settings, it is important to align LLMs to generate texts according to acceptable human standards. Methods such as Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) have enabled significant progress in refining LLMs using human preference data. However, the privacy concerns inherent in utilizing such preference data have yet to be adequately studied. In this paper, we investigate the vulnerability of LLMs aligned using two widely used methods - DPO and PPO - to membership inference attacks (MIAs). Our study has two main contributions: first, we theoretically motivate that DPO models are more vulnerable to MIA compared to PPO models; second, we introduce a novel reference-based attack framework specifically for analyzing preference data called PREMIA (Preference data MIA). Using PREMIA and existing baselines we empirically show that DPO models have a relatively heightened vulnerability towards MIA.}
}
Endnote
%0 Conference Paper
%T Exposing Privacy Gaps: Membership Inference Attack on Preference Data for LLM Alignment
%A Qizhang Feng
%A Siva Rajesh Kasa
%A Santhosh Kumar Kasa
%A Hyokun Yun
%A Choon Hui Teo
%A Sravan Babu Bodapati
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-feng25a
%I PMLR
%P 5221--5229
%U https://proceedings.mlr.press/v258/feng25a.html
%V 258
%X Large Language Models (LLMs) have seen widespread adoption due to their remarkable natural language capabilities. However, when deploying them in real-world settings, it is important to align LLMs to generate texts according to acceptable human standards. Methods such as Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) have enabled significant progress in refining LLMs using human preference data. However, the privacy concerns inherent in utilizing such preference data have yet to be adequately studied. In this paper, we investigate the vulnerability of LLMs aligned using two widely used methods - DPO and PPO - to membership inference attacks (MIAs). Our study has two main contributions: first, we theoretically motivate that DPO models are more vulnerable to MIA compared to PPO models; second, we introduce a novel reference-based attack framework specifically for analyzing preference data called PREMIA (Preference data MIA). Using PREMIA and existing baselines we empirically show that DPO models have a relatively heightened vulnerability towards MIA.
APA
Feng, Q., Kasa, S.R., Kasa, S.K., Yun, H., Teo, C.H. & Bodapati, S.B. (2025). Exposing Privacy Gaps: Membership Inference Attack on Preference Data for LLM Alignment. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:5221-5229. Available from https://proceedings.mlr.press/v258/feng25a.html.
