Secure Quantized Training for Deep Learning

Marcel Keller, Ke Sun
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10912-10938, 2022.

Abstract

We implement training of neural networks in secure multi-party computation (MPC) using quantization commonly used in said setting. We are the first to present an MNIST classifier purely trained in MPC that comes within 0.2 percent of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolutional and two dense layers to 99.2% accuracy in 3.5 hours (under one hour for 99% accuracy). We have also implemented AlexNet for CIFAR-10, which converges in a few hours. We develop novel protocols for exponentiation and inverse square root. Finally, we present experiments in a range of MPC security models for up to ten parties, both with honest and dishonest majority as well as semi-honest and malicious security.
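The "quantization commonly used in said setting" refers to fixed-point encoding, the standard way MPC frameworks represent real numbers: a real is scaled by 2^f and stored as an integer, and products (which carry 2f fractional bits) are truncated back to f bits. A minimal plaintext sketch of this encoding, illustrative only and not the paper's secure protocol, with f = 16 as an assumed choice:

```python
# Fixed-point quantization as commonly used in MPC (plaintext sketch).
# Reals are scaled by 2^F and stored as integers; a product carries
# 2F fractional bits and is truncated back down to F bits.

F = 16  # number of fractional bits (an assumed parameter choice)

def encode(x: float) -> int:
    """Encode a real as a fixed-point integer with F fractional bits."""
    return round(x * (1 << F))

def decode(v: int) -> float:
    """Decode a fixed-point integer back to a real."""
    return v / (1 << F)

def fx_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values, truncating the extra F bits."""
    return (a * b) >> F

x, y = encode(1.5), encode(-0.25)
print(decode(fx_mul(x, y)))  # approximately -0.375
```

In the secure setting, the truncation step itself requires a dedicated MPC protocol, which is one source of the overhead the paper's experiments measure.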

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-keller22a,
  title     = {Secure Quantized Training for Deep Learning},
  author    = {Keller, Marcel and Sun, Ke},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10912--10938},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/keller22a/keller22a.pdf},
  url       = {https://proceedings.mlr.press/v162/keller22a.html},
  abstract  = {We implement training of neural networks in secure multi-party computation (MPC) using quantization commonly used in said setting. We are the first to present an MNIST classifier purely trained in MPC that comes within 0.2 percent of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolutional and two dense layers to 99.2% accuracy in 3.5 hours (under one hour for 99% accuracy). We have also implemented AlexNet for CIFAR-10, which converges in a few hours. We develop novel protocols for exponentiation and inverse square root. Finally, we present experiments in a range of MPC security models for up to ten parties, both with honest and dishonest majority as well as semi-honest and malicious security.}
}
Endnote
%0 Conference Paper
%T Secure Quantized Training for Deep Learning
%A Marcel Keller
%A Ke Sun
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-keller22a
%I PMLR
%P 10912--10938
%U https://proceedings.mlr.press/v162/keller22a.html
%V 162
%X We implement training of neural networks in secure multi-party computation (MPC) using quantization commonly used in said setting. We are the first to present an MNIST classifier purely trained in MPC that comes within 0.2 percent of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolutional and two dense layers to 99.2% accuracy in 3.5 hours (under one hour for 99% accuracy). We have also implemented AlexNet for CIFAR-10, which converges in a few hours. We develop novel protocols for exponentiation and inverse square root. Finally, we present experiments in a range of MPC security models for up to ten parties, both with honest and dishonest majority as well as semi-honest and malicious security.
APA
Keller, M. & Sun, K. (2022). Secure Quantized Training for Deep Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10912-10938. Available from https://proceedings.mlr.press/v162/keller22a.html.