SecureNets: Secure Inference of Deep Neural Networks on an Untrusted Cloud


Xuhui Chen, Jinlong Ji, Lixing Yu, Changqing Luo, Pan Li;
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:646-661, 2018.


Inference using deep neural networks may be outsourced to the cloud due to its high computational cost, which, however, raises security concerns. In particular, the data involved in deep neural networks can be highly sensitive, such as in medical, financial, and commercial applications, and hence should be kept private. In addition, the deep neural network models owned by research institutions or commercial companies are valuable intellectual property and can contain proprietary information, which should be protected as well. Moreover, an untrusted cloud service provider may return inaccurate or even erroneous computing results. To address the above issues, we propose a secure outsourcing framework for deep neural network inference called SecureNets, which can preserve both a user’s data privacy and his/her neural network model privacy, and also verify the computation results returned by the cloud. Specifically, we employ a secure matrix transformation scheme in SecureNets to avoid privacy leakage of the data and the model. Meanwhile, we propose a verification method that can efficiently verify the correctness of cloud computing results. Our simulation results on four- and five-layer deep neural networks demonstrate that SecureNets can reduce the processing runtime by up to 64%. Compared with CryptoNets, one of the previous schemes, SecureNets can increase the throughput by 104.45% while reducing the data transmission size by 69.78% per instance.
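The abstract does not reproduce the paper's actual transformation or verification schemes, but the general pattern it describes can be illustrated: the client hides the model weights and inputs behind random invertible masks, the cloud multiplies the masked operands, and the client unmasks and checks the result probabilistically. The sketch below is an assumption-laden toy (diagonal masks and a Freivalds-style check are illustrative stand-ins, not SecureNets' method; diagonal masking alone is known to give weak privacy):

```python
import numpy as np

# Toy sketch of masked outsourced matrix multiplication with
# probabilistic result verification. NOT the SecureNets protocol;
# the masking and verification here are generic illustrative choices.

rng = np.random.default_rng(0)
n, m, k = 4, 3, 5
W = rng.standard_normal((n, m))   # model weights (to be kept private)
X = rng.standard_normal((m, k))   # batch of user inputs (to be kept private)

# Client-side masking with random invertible diagonal matrices (assumption).
P = np.diag(rng.uniform(1.0, 2.0, size=n))
Q = np.diag(rng.uniform(1.0, 2.0, size=m))
W_masked = P @ W @ Q              # sent to the cloud
X_masked = np.linalg.inv(Q) @ X   # sent to the cloud

# Cloud computes on masked operands only.
Y_masked = W_masked @ X_masked

# Client unmasks: P^{-1} (P W Q)(Q^{-1} X) = W X.
Y = np.linalg.inv(P) @ Y_masked

# Freivalds-style check: for a random 0/1 vector r, the identity
# r @ Y == (r @ W) @ X holds with high probability only if Y == W @ X,
# and costs O(nk + nm + mk) instead of the O(nmk) full product.
r = rng.integers(0, 2, size=n).astype(float)
verified = np.allclose(r @ Y, (r @ W) @ X)
print(verified)
```

The unmasking step cancels the masks exactly, so a correct cloud result passes the check while a tampered result fails it with high probability over the choice of r.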
