It’s My Data Too: Private ML for Datasets with Multi-User Training Examples

Arun Ganesh, Ryan Mckenna, Hugh Brendan Mcmahan, Adam Smith, Fan Wu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:18189-18205, 2025.

Abstract

We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example may be attributed to multiple users, which we call the multi-attribution model. We first provide a carefully chosen definition of user-level DP under the multi-attribution model. Training in the multi-attribution model is facilitated by solving the contribution bounding problem, i.e. the problem of selecting a subset of the dataset for which each user is associated with a limited number of examples. We propose a greedy baseline algorithm for the contribution bounding problem. We then empirically study this algorithm for a synthetic logistic regression task and a transformer training task, including studying variants of this baseline algorithm that optimize the subset chosen using different techniques and criteria. We find that the baseline algorithm remains competitive with its variants in most settings, and build a better understanding of the practical importance of a bias-variance tradeoff inherent in solutions to the contribution bounding problem.
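The contribution bounding problem described in the abstract — selecting a subset of examples so that each user is attributed to at most a bounded number of them — admits a natural greedy heuristic. The paper's actual baseline may differ in ordering and tie-breaking; the sketch below is only an illustrative interpretation, where an example is kept if none of its attributed users has yet reached the contribution cap `k` (all names here are hypothetical, not from the paper):

```python
def greedy_contribution_bounding(examples, k):
    """Select a subset of examples such that every user is attributed
    to at most k selected examples.

    examples: list of sets, where examples[i] is the set of users
              attributed to example i (the multi-attribution model).
    k:        per-user contribution bound.
    Returns the indices of the selected examples.
    """
    counts = {}    # user -> number of selected examples so far
    selected = []
    for idx, users in enumerate(examples):
        # Keep the example only if it pushes no user past the cap.
        if all(counts.get(u, 0) < k for u in users):
            selected.append(idx)
            for u in users:
                counts[u] = counts.get(u, 0) + 1
    return selected
```

Note the bias-variance tradeoff the abstract alludes to: a small `k` discards more data (bias) but lets DP training add less noise per user, while a large `k` keeps more data at the cost of a larger per-user sensitivity.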

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ganesh25a,
  title     = {It’s My Data Too: Private {ML} for Datasets with Multi-User Training Examples},
  author    = {Ganesh, Arun and Mckenna, Ryan and Mcmahan, Hugh Brendan and Smith, Adam and Wu, Fan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {18189--18205},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ganesh25a/ganesh25a.pdf},
  url       = {https://proceedings.mlr.press/v267/ganesh25a.html},
  abstract  = {We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example may be attributed to multiple users, which we call the multi-attribution model. We first provide a carefully chosen definition of user-level DP under the multi-attribution model. Training in the multi-attribution model is facilitated by solving the contribution bounding problem, i.e. the problem of selecting a subset of the dataset for which each user is associated with a limited number of examples. We propose a greedy baseline algorithm for the contribution bounding problem. We then empirically study this algorithm for a synthetic logistic regression task and a transformer training task, including studying variants of this baseline algorithm that optimize the subset chosen using different techniques and criteria. We find that the baseline algorithm remains competitive with its variants in most settings, and build a better understanding of the practical importance of a bias-variance tradeoff inherent in solutions to the contribution bounding problem.}
}
Endnote
%0 Conference Paper
%T It’s My Data Too: Private ML for Datasets with Multi-User Training Examples
%A Arun Ganesh
%A Ryan Mckenna
%A Hugh Brendan Mcmahan
%A Adam Smith
%A Fan Wu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-ganesh25a
%I PMLR
%P 18189--18205
%U https://proceedings.mlr.press/v267/ganesh25a.html
%V 267
%X We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example may be attributed to multiple users, which we call the multi-attribution model. We first provide a carefully chosen definition of user-level DP under the multi-attribution model. Training in the multi-attribution model is facilitated by solving the contribution bounding problem, i.e. the problem of selecting a subset of the dataset for which each user is associated with a limited number of examples. We propose a greedy baseline algorithm for the contribution bounding problem. We then empirically study this algorithm for a synthetic logistic regression task and a transformer training task, including studying variants of this baseline algorithm that optimize the subset chosen using different techniques and criteria. We find that the baseline algorithm remains competitive with its variants in most settings, and build a better understanding of the practical importance of a bias-variance tradeoff inherent in solutions to the contribution bounding problem.
APA
Ganesh, A., Mckenna, R., Mcmahan, H.B., Smith, A. & Wu, F. (2025). It’s My Data Too: Private ML for Datasets with Multi-User Training Examples. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:18189-18205. Available from https://proceedings.mlr.press/v267/ganesh25a.html.