Position: Contextual Integrity is Inadequately Applied to Language Models

Yan Shvartzshnaider, Vasisht Duddu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:82200-82210, 2025.

Abstract

The machine learning community is discovering Contextual Integrity (CI) as a useful framework to assess the privacy implications of large language models (LLMs). This is an encouraging development. CI theory emphasizes sharing information in accordance with privacy norms and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good point to reflect on the use of CI for LLMs. This position paper argues that existing literature inadequately applies CI to LLMs, without embracing the theory’s fundamental tenets. Inadequate applications of CI could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work according to whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-shvartzshnaider25a,
  title     = {Position: Contextual Integrity is Inadequately Applied to Language Models},
  author    = {Shvartzshnaider, Yan and Duddu, Vasisht},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {82200--82210},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/shvartzshnaider25a/shvartzshnaider25a.pdf},
  url       = {https://proceedings.mlr.press/v267/shvartzshnaider25a.html},
  abstract  = {The machine learning community is discovering Contextual Integrity (CI) as a useful framework to assess the privacy implications of large language models (LLMs). This is an encouraging development. CI theory emphasizes sharing information in accordance with privacy norms and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good point to reflect on the use of CI for LLMs. This position paper argues that existing literature inadequately applies CI to LLMs, without embracing the theory’s fundamental tenets. Inadequate applications of CI could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work according to whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).}
}
Endnote
%0 Conference Paper
%T Position: Contextual Integrity is Inadequately Applied to Language Models
%A Yan Shvartzshnaider
%A Vasisht Duddu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-shvartzshnaider25a
%I PMLR
%P 82200--82210
%U https://proceedings.mlr.press/v267/shvartzshnaider25a.html
%V 267
%X The machine learning community is discovering Contextual Integrity (CI) as a useful framework to assess the privacy implications of large language models (LLMs). This is an encouraging development. CI theory emphasizes sharing information in accordance with privacy norms and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good point to reflect on the use of CI for LLMs. This position paper argues that existing literature inadequately applies CI to LLMs, without embracing the theory’s fundamental tenets. Inadequate applications of CI could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work according to whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).
APA
Shvartzshnaider, Y. & Duddu, V. (2025). Position: Contextual Integrity is Inadequately Applied to Language Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:82200-82210. Available from https://proceedings.mlr.press/v267/shvartzshnaider25a.html.
