SecureNets: Secure Inference of Deep Neural Networks on an Untrusted Cloud

Xuhui Chen, Jinlong Ji, Lixing Yu, Changqing Luo, Pan Li
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:646-661, 2018.

Abstract

Inference with deep neural networks is often outsourced to the cloud because of its high computational cost; outsourcing, however, raises security concerns. In particular, the data fed into deep neural networks can be highly sensitive, as in medical, financial, and commercial applications, and should therefore be kept private. In addition, the deep neural network models owned by research institutions or commercial companies are valuable intellectual property and may contain proprietary information, so they should be protected as well. Moreover, an untrusted cloud service provider may return inaccurate or even erroneous computing results. To address these issues, we propose SecureNets, a secure outsourcing framework for deep neural network inference that preserves both a user’s data privacy and his/her neural network model privacy, and also verifies the computation results returned by the cloud. Specifically, SecureNets employs a secure matrix transformation scheme to prevent privacy leakage of the data and the model, together with a verification method that efficiently checks the correctness of the cloud’s computing results. Our simulation results on four- and five-layer deep neural networks demonstrate that SecureNets reduces processing runtime by up to 64%. Compared with CryptoNets, a previous scheme, SecureNets increases throughput by 104.45% while reducing the data transmission size per instance by 69.78%.
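The abstract only names the two building blocks. As a rough sketch of how such a pipeline fits together, the following NumPy example disguises a layer's weight matrix and a batch of inputs with random diagonal masks, lets an untrusted party multiply the disguised operands, verifies the returned product with a Freivalds-style randomized check, and then removes the masks. The diagonal masking, the check, and every function name here are illustrative assumptions for exposition, not the actual SecureNets transformation or verification protocol from the paper.

import numpy as np

rng = np.random.default_rng(0)

# --- client side: disguise one layer's weights W (m x n) and inputs X (n x b) ---
def mask(W, X):
    p = rng.uniform(1.0, 2.0, size=W.shape[0])   # left diagonal mask
    q = rng.uniform(1.0, 2.0, size=W.shape[1])   # right diagonal mask
    W_m = (p[:, None] * W) * q[None, :]          # diag(p) @ W @ diag(q)
    X_m = X / q[:, None]                         # diag(q)^{-1} @ X
    return W_m, X_m, p

def unmask(Y_m, p):
    return Y_m / p[:, None]                      # diag(p)^{-1} @ Y_m = W @ X

# --- cloud side: sees only the disguised operands ---
def cloud_matmul(W_m, X_m):
    return W_m @ X_m

# --- client side: randomized verification of the returned product ---
def verify(W_m, X_m, Y_m, trials=20):
    # Freivalds' check: for random 0/1 vector r, a correct product satisfies
    # W_m @ (X_m @ r) == Y_m @ r; each trial uses only matrix-vector products.
    for _ in range(trials):
        r = rng.integers(0, 2, size=X_m.shape[1]).astype(float)
        if not np.allclose(W_m @ (X_m @ r), Y_m @ r):
            return False
    return True

W = rng.standard_normal((64, 128))               # toy layer weights
X = rng.standard_normal((128, 32))               # batch of 32 input vectors
W_m, X_m, p = mask(W, X)
Y_m = cloud_matmul(W_m, X_m)
assert verify(W_m, X_m, Y_m)                     # accept the cloud's answer
assert np.allclose(unmask(Y_m, p), W @ X)        # recover the true result

Each verification trial costs O(n^2) work rather than the O(n^3) of recomputing the product, and a wrong product escapes all twenty trials with probability at most 2^-20, which is why the client can afford to check the cloud's answer.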

Cite this Paper


BibTeX
@InProceedings{pmlr-v95-chen18a,
  title     = {SecureNets: Secure Inference of Deep Neural Networks on an Untrusted Cloud},
  author    = {Chen, Xuhui and Ji, Jinlong and Yu, Lixing and Luo, Changqing and Li, Pan},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  pages     = {646--661},
  year      = {2018},
  editor    = {Zhu, Jun and Takeuchi, Ichiro},
  volume    = {95},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--16 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v95/chen18a/chen18a.pdf},
  url       = {https://proceedings.mlr.press/v95/chen18a.html},
  abstract  = {Inference using deep neural networks may be outsourced to the cloud due to its high computational cost, which, however, raises security concerns. Particularly, the data involved in deep neural networks can be highly sensitive, such as in medical, financial, and commercial applications, and hence should be kept private. Besides, the deep neural network models owned by research institutions or commercial companies are their valuable intellectual properties and can contain proprietary information, which should be protected as well. Moreover, an untrusted cloud service provider may return inaccurate or even erroneous computing results. To address the above issues, we propose a secure outsourcing framework for deep neural network inference called SecureNets, which can preserve both a user's data privacy and his/her neural network model privacy, and also verify the computation results returned by the cloud. Specifically, we employ a secure matrix transformation scheme in SecureNets to avoid privacy leakage of the data and the model. Meanwhile, we propose a verification method that can efficiently verify the correctness of cloud computing results. Our simulation results on four- and five-layer deep neural networks demonstrate that SecureNets can reduce the processing runtime by up to $64\%$. Compared with CryptoNets, one of the previous schemes, SecureNets can increase the throughput by $104.45\%$ while reducing the data transmission size by $69.78\%$ per instance.}
}
Endnote
%0 Conference Paper
%T SecureNets: Secure Inference of Deep Neural Networks on an Untrusted Cloud
%A Xuhui Chen
%A Jinlong Ji
%A Lixing Yu
%A Changqing Luo
%A Pan Li
%B Proceedings of The 10th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jun Zhu
%E Ichiro Takeuchi
%F pmlr-v95-chen18a
%I PMLR
%P 646--661
%U https://proceedings.mlr.press/v95/chen18a.html
%V 95
%X Inference using deep neural networks may be outsourced to the cloud due to its high computational cost, which, however, raises security concerns. Particularly, the data involved in deep neural networks can be highly sensitive, such as in medical, financial, and commercial applications, and hence should be kept private. Besides, the deep neural network models owned by research institutions or commercial companies are their valuable intellectual properties and can contain proprietary information, which should be protected as well. Moreover, an untrusted cloud service provider may return inaccurate or even erroneous computing results. To address the above issues, we propose a secure outsourcing framework for deep neural network inference called SecureNets, which can preserve both a user's data privacy and his/her neural network model privacy, and also verify the computation results returned by the cloud. Specifically, we employ a secure matrix transformation scheme in SecureNets to avoid privacy leakage of the data and the model. Meanwhile, we propose a verification method that can efficiently verify the correctness of cloud computing results. Our simulation results on four- and five-layer deep neural networks demonstrate that SecureNets can reduce the processing runtime by up to 64%. Compared with CryptoNets, one of the previous schemes, SecureNets can increase the throughput by 104.45% while reducing the data transmission size by 69.78% per instance.
APA
Chen, X., Ji, J., Yu, L., Luo, C. & Li, P. (2018). SecureNets: Secure Inference of Deep Neural Networks on an Untrusted Cloud. Proceedings of The 10th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 95:646-661. Available from https://proceedings.mlr.press/v95/chen18a.html.