Differentially Private Representation Learning via Image Captioning

Tom Sander, Yaodong Yu, Maziar Sanjabi, Alain Oliviero Durmus, Yi Ma, Kamalika Chaudhuri, Chuan Guo
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43255-43275, 2024.

Abstract

Differentially private (DP) machine learning is considered the gold-standard solution for training a model from sensitive data while still preserving privacy. However, a major barrier to achieving this ideal is its sub-optimal privacy-accuracy trade-off, which is particularly visible in DP representation learning. Specifically, it has been shown that under modest privacy budgets, most models learn representations that are not significantly better than hand-crafted features. In this work, we show that effective DP representation learning can be done via image captioning and scaling up to internet-scale multimodal datasets. Through a series of engineering tricks, we successfully train a DP image captioner (DP-Cap) on a 233M subset of LAION-2B from scratch using a reasonable amount of computation, and obtain image features of unprecedented quality that can be used in a variety of downstream vision and vision-language tasks. For example, under a privacy budget of $\varepsilon=8$ for the LAION dataset, a linear classifier trained on top of learned DP-Cap features attains $65.8\%$ accuracy on ImageNet-1K, considerably improving the previous SOTA of $56.5\%$. Our work challenges the prevailing sentiment that high-utility DP representation learning cannot be achieved by training from scratch.
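The abstract's core mechanism is standard DP-SGD training: clip each example's gradient to a fixed norm, sum, and add calibrated Gaussian noise before the parameter update. The sketch below illustrates that recipe on a toy linear model with squared loss; the function name, model, and hyperparameter values are illustrative assumptions, not the paper's actual training code (which applies the same recipe to a full image captioner).

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for linear regression with squared loss.

    Per-example gradients are clipped to L2 norm <= `clip_norm`, summed,
    and Gaussian noise with std `noise_mult * clip_norm` is added before
    averaging -- the standard DP-SGD recipe (Abadi et al., 2016).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(X)
    # Per-example gradients of 0.5 * (x @ w - y)^2 w.r.t. w: shape (n, d)
    residuals = X @ w - y
    grads = residuals[:, None] * X
    # Clip each example's gradient so its L2 norm is at most clip_norm
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum clipped gradients, add noise calibrated to the clipping bound,
    # then average and take a gradient step
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n
```

Because each example's contribution to the update is bounded by `clip_norm`, the added noise yields a per-step privacy guarantee; composing over all steps (e.g. with a moments accountant) gives the overall budget such as the $\varepsilon=8$ quoted above.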

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-sander24b,
  title     = {Differentially Private Representation Learning via Image Captioning},
  author    = {Sander, Tom and Yu, Yaodong and Sanjabi, Maziar and Oliviero Durmus, Alain and Ma, Yi and Chaudhuri, Kamalika and Guo, Chuan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43255--43275},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/sander24b/sander24b.pdf},
  url       = {https://proceedings.mlr.press/v235/sander24b.html},
  abstract  = {Differentially private (DP) machine learning is considered the gold-standard solution for training a model from sensitive data while still preserving privacy. However, a major barrier to achieving this ideal is its sub-optimal privacy-accuracy trade-off, which is particularly visible in DP representation learning. Specifically, it has been shown that under modest privacy budgets, most models learn representations that are not significantly better than hand-crafted features. In this work, we show that effective DP representation learning can be done via image captioning and scaling up to internet-scale multimodal datasets. Through a series of engineering tricks, we successfully train a DP image captioner (DP-Cap) on a 233M subset of LAION-2B from scratch using a reasonable amount of computation, and obtain image features of unprecedented quality that can be used in a variety of downstream vision and vision-language tasks. For example, under a privacy budget of $\varepsilon=8$ for the LAION dataset, a linear classifier trained on top of learned DP-Cap features attains $65.8\%$ accuracy on ImageNet-1K, considerably improving the previous SOTA of $56.5\%$. Our work challenges the prevailing sentiment that high-utility DP representation learning cannot be achieved by training from scratch.}
}
Endnote
%0 Conference Paper
%T Differentially Private Representation Learning via Image Captioning
%A Tom Sander
%A Yaodong Yu
%A Maziar Sanjabi
%A Alain Oliviero Durmus
%A Yi Ma
%A Kamalika Chaudhuri
%A Chuan Guo
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-sander24b
%I PMLR
%P 43255--43275
%U https://proceedings.mlr.press/v235/sander24b.html
%V 235
%X Differentially private (DP) machine learning is considered the gold-standard solution for training a model from sensitive data while still preserving privacy. However, a major barrier to achieving this ideal is its sub-optimal privacy-accuracy trade-off, which is particularly visible in DP representation learning. Specifically, it has been shown that under modest privacy budgets, most models learn representations that are not significantly better than hand-crafted features. In this work, we show that effective DP representation learning can be done via image captioning and scaling up to internet-scale multimodal datasets. Through a series of engineering tricks, we successfully train a DP image captioner (DP-Cap) on a 233M subset of LAION-2B from scratch using a reasonable amount of computation, and obtain image features of unprecedented quality that can be used in a variety of downstream vision and vision-language tasks. For example, under a privacy budget of $\varepsilon=8$ for the LAION dataset, a linear classifier trained on top of learned DP-Cap features attains $65.8\%$ accuracy on ImageNet-1K, considerably improving the previous SOTA of $56.5\%$. Our work challenges the prevailing sentiment that high-utility DP representation learning cannot be achieved by training from scratch.
APA
Sander, T., Yu, Y., Sanjabi, M., Oliviero Durmus, A., Ma, Y., Chaudhuri, K., & Guo, C. (2024). Differentially Private Representation Learning via Image Captioning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43255-43275. Available from https://proceedings.mlr.press/v235/sander24b.html.
