GroupCover: A Secure, Efficient and Scalable Inference Framework for On-device Model Protection based on TEEs

Zheng Zhang, Na Wang, Ziqi Zhang, Yao Zhang, Tianyi Zhang, Jianwei Liu, Ye Wu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:59992-60003, 2024.

Abstract

Due to the high cost of training DNN models, protecting their intellectual property, especially when the models are deployed to users' devices, has become an important topic. One practical solution is to use Trusted Execution Environments (TEEs), and researchers have proposed various model obfuscation solutions that combine the strong security guarantees of TEEs with the high performance of collocated GPUs. In this paper, we first identify a common vulnerability, namely the fragility of randomness, that is shared by existing TEE-based model obfuscation solutions. This vulnerability facilitates model-stealing attacks and allows an adversary to recover about 97% of the secret model. To improve the security of TEE-shielded DNN models, we further propose a new model obfuscation approach, GroupCover, which uses sufficient randomization and mutual covering obfuscation to protect model weights. Experimental results demonstrate that GroupCover achieves a security level comparable to the upper bound (black-box protection), more than 3x that of existing solutions. Moreover, GroupCover introduces only 19% overhead and negligible accuracy loss compared with the unprotected model.
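
To make the setting concrete, below is a minimal, hypothetical sketch of the general TEE-shielded obfuscation pattern this line of work builds on; it is not the paper's GroupCover construction, and the specific transformation (a secret row permutation plus per-channel scaling on a linear layer) is an illustrative assumption. The untrusted GPU computes with obfuscated weights, while the TEE keeps the secret transformation and de-obfuscates the layer output.

```python
# Illustrative sketch only; NOT the paper's GroupCover algorithm.
# A linear layer y = W @ x is offloaded to an untrusted GPU with
# obfuscated weights; the TEE holds the secrets (perm, scale) and
# undoes the transformation on the output.
import numpy as np

rng = np.random.default_rng(seed=0)
d_out, d_in = 8, 16
W = rng.normal(size=(d_out, d_in))   # secret model weights
x = rng.normal(size=d_in)            # inference input

# --- Offline, inside the TEE: obfuscate the weights once ---
perm = rng.permutation(d_out)               # secret row permutation
scale = rng.uniform(0.5, 2.0, size=d_out)   # secret per-row scaling
W_obf = (scale[:, None] * W)[perm]          # scale rows, then permute

# --- Online, on the untrusted GPU: compute with obfuscated weights ---
y_obf = W_obf @ x                           # GPU never sees W

# --- Online, inside the TEE: undo the secret transformation ---
inv_perm = np.argsort(perm)                 # inverse of perm
y = y_obf[inv_perm] / scale                 # recover the true output

assert np.allclose(y, W @ x)
```

The paper's "fragility of randomness" observation is that shallow transformations of this kind can be largely inverted by an attacker, recovering about 97% of the secret model; GroupCover's proposal is to apply sufficient randomization together with mutual covering obfuscation across weight groups so that such inversion is no longer feasible.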

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhang24bn,
  title     = {{G}roup{C}over: A Secure, Efficient and Scalable Inference Framework for On-device Model Protection based on {TEE}s},
  author    = {Zhang, Zheng and Wang, Na and Zhang, Ziqi and Zhang, Yao and Zhang, Tianyi and Liu, Jianwei and Wu, Ye},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {59992--60003},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24bn/zhang24bn.pdf},
  url       = {https://proceedings.mlr.press/v235/zhang24bn.html}
}
EndNote
%0 Conference Paper
%T GroupCover: A Secure, Efficient and Scalable Inference Framework for On-device Model Protection based on TEEs
%A Zheng Zhang
%A Na Wang
%A Ziqi Zhang
%A Yao Zhang
%A Tianyi Zhang
%A Jianwei Liu
%A Ye Wu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhang24bn
%I PMLR
%P 59992--60003
%U https://proceedings.mlr.press/v235/zhang24bn.html
%V 235
APA
Zhang, Z., Wang, N., Zhang, Z., Zhang, Y., Zhang, T., Liu, J. & Wu, Y. (2024). GroupCover: A Secure, Efficient and Scalable Inference Framework for On-device Model Protection based on TEEs. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:59992-60003. Available from https://proceedings.mlr.press/v235/zhang24bn.html.