Privacy in Metalearning and Multitask Learning: Modeling and Separations

Maryam Aliakbarpour, Konstantina Bairaktari, Adam Smith, Marika Swanberg, Jonathan Ullman
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4096-4104, 2025.

Abstract

Model personalization allows a set of individuals, each facing a different learning task, to train models that are more accurate for each person than those they could develop individually. The goals of personalization are captured in a variety of formal frameworks, such as multitask learning and metalearning. Combining data for model personalization poses risks for privacy because the output of an individual’s model can depend on the data of other individuals. In this work we undertake a systematic study of differentially private personalized learning. Our first main contribution is to construct a taxonomy of formal frameworks for private personalized learning. This taxonomy captures different formal frameworks for learning as well as different threat models for the attacker. Our second main contribution is to prove separations between the personalized learning problems corresponding to different choices. In particular, we prove a novel separation between private multitask learning and private metalearning.
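Background sketch (not stated on this page): the privacy notion the abstract refers to is standard (ε, δ)-differential privacy. The assumption below, that neighboring inputs D, D' differ in one individual's entire local dataset (person-level privacy), is our illustrative reading of the personalized-learning setting, not a claim from the paper.

% Hypothetical background definition, not taken from the paper:
% a randomized learner M is (epsilon, delta)-differentially private if,
% for all neighboring inputs D, D' (here assumed to differ in one
% individual's entire dataset) and all measurable output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
\]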

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-aliakbarpour25b,
  title     = {Privacy in Metalearning and Multitask Learning: Modeling and Separations},
  author    = {Aliakbarpour, Maryam and Bairaktari, Konstantina and Smith, Adam and Swanberg, Marika and Ullman, Jonathan},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4096--4104},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/aliakbarpour25b/aliakbarpour25b.pdf},
  url       = {https://proceedings.mlr.press/v258/aliakbarpour25b.html},
  abstract  = {Model personalization allows a set of individuals, each facing a different learning task, to train models that are more accurate for each person than those they could develop individually. The goals of personalization are captured in a variety of formal frameworks, such as multitask learning and metalearning. Combining data for model personalization poses risks for privacy because the output of an individual’s model can depend on the data of other individuals. In this work we undertake a systematic study of differentially private personalized learning. Our first main contribution is to construct a taxonomy of formal frameworks for private personalized learning. This taxonomy captures different formal frameworks for learning as well as different threat models for the attacker. Our second main contribution is to prove separations between the personalized learning problems corresponding to different choices. In particular, we prove a novel separation between private multitask learning and private metalearning.}
}
Endnote
%0 Conference Paper
%T Privacy in Metalearning and Multitask Learning: Modeling and Separations
%A Maryam Aliakbarpour
%A Konstantina Bairaktari
%A Adam Smith
%A Marika Swanberg
%A Jonathan Ullman
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-aliakbarpour25b
%I PMLR
%P 4096--4104
%U https://proceedings.mlr.press/v258/aliakbarpour25b.html
%V 258
%X Model personalization allows a set of individuals, each facing a different learning task, to train models that are more accurate for each person than those they could develop individually. The goals of personalization are captured in a variety of formal frameworks, such as multitask learning and metalearning. Combining data for model personalization poses risks for privacy because the output of an individual’s model can depend on the data of other individuals. In this work we undertake a systematic study of differentially private personalized learning. Our first main contribution is to construct a taxonomy of formal frameworks for private personalized learning. This taxonomy captures different formal frameworks for learning as well as different threat models for the attacker. Our second main contribution is to prove separations between the personalized learning problems corresponding to different choices. In particular, we prove a novel separation between private multitask learning and private metalearning.
APA
Aliakbarpour, M., Bairaktari, K., Smith, A., Swanberg, M., & Ullman, J. (2025). Privacy in Metalearning and Multitask Learning: Modeling and Separations. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4096-4104. Available from https://proceedings.mlr.press/v258/aliakbarpour25b.html.

Related Material

Download PDF: https://raw.githubusercontent.com/mlresearch/v258/main/assets/aliakbarpour25b/aliakbarpour25b.pdf