Detecting and Repairing Deviated Outputs of Compressed Models

Yichen Li, Qi Pang, Dongwei Xiao, Zhibo Liu, Shuai Wang
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:707-722, 2024.

Abstract

With the rapid development of deep learning and its pervasive use on low-power, resource-constrained devices, model compression methods are increasingly used to reduce model size and computation cost. Despite the overall high test accuracy of compressed models, we observe that an original model and its compressed version (e.g., via quantization) can produce deviating prediction outputs on the same inputs. Such behavior deviations in compressed models are undesirable, given that compressed models may be used in reliability-critical scenarios such as automated manufacturing and robotics systems. Inspired by software engineering practices, this paper proposes CompD, a differential testing (DT)-based framework for detecting and repairing prediction deviations between compressed models and their plaintext versions. CompD treats the original/compressed models as “black-box,” thus offering an efficient method that is orthogonal to the specific compression scheme. Furthermore, CompD can leverage deviation-triggering inputs to finetune the compressed models, largely “repairing” their defects. Evaluations show that CompD can effectively test and repair common models compressed by different schemes.
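To make the workflow concrete, below is a minimal sketch of the detect-then-repair loop the abstract describes. It is not the authors' CompD implementation: the toy MLP, the random input pool, the choice of dynamic int8 quantization as the compression scheme, and the repair step (finetuning a float copy on deviation-triggering inputs and re-quantizing, rather than finetuning the compressed model directly) are all illustrative assumptions.

import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a pretrained "original" (plaintext) classifier.
original = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
original.eval()

def compress(model):
    # Post-training dynamic quantization of the Linear layers to int8,
    # one possible compression scheme; the comparison below needs only
    # black-box access, so any scheme could be plugged in here.
    return torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)

compressed = compress(original)

# Detection: differential testing over a pool of candidate inputs,
# flagging every input on which the two models disagree.
pool = torch.randn(4096, 32)
with torch.no_grad():
    ref_labels = original(pool).argmax(dim=1)
    cmp_labels = compressed(pool).argmax(dim=1)
mask = ref_labels != cmp_labels
deviating, targets = pool[mask], ref_labels[mask]
print(f"{int(mask.sum())} deviation-triggering inputs out of {len(pool)}")

# Repair (simplified): finetune a float copy on the deviating inputs,
# using the original model's predictions as targets, then re-compress.
# This boosts the model's margin on the flagged inputs so quantization
# is less likely to flip them; the paper finetunes the compressed model
# itself, which this heuristic only approximates.
if mask.any():
    repaired = copy.deepcopy(original)
    repaired.train()
    optimizer = torch.optim.Adam(repaired.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(50):
        optimizer.zero_grad()
        loss_fn(repaired(deviating), targets).backward()
        optimizer.step()
    repaired.eval()
    recompressed = compress(repaired)
    with torch.no_grad():
        remaining = int((recompressed(deviating).argmax(dim=1) != targets).sum())
    print(f"{remaining} of {len(deviating)} deviations remain after repair")

The sketch only illustrates the black-box output comparison that makes the approach independent of the compression scheme; CompD's actual input generation and repair procedures are more sophisticated.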

Cite this Paper

BibTeX
@InProceedings{pmlr-v222-li24b,
  title     = {Detecting and Repairing Deviated Outputs of Compressed Models},
  author    = {Li, Yichen and Pang, Qi and Xiao, Dongwei and Liu, Zhibo and Wang, Shuai},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {707--722},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/li24b/li24b.pdf},
  url       = {https://proceedings.mlr.press/v222/li24b.html},
  abstract  = {With the rapid development of deep learning and its pervasive use on low-power, resource-constrained devices, model compression methods are increasingly used to reduce model size and computation cost. Despite the overall high test accuracy of compressed models, we observe that an original model and its compressed version (e.g., via quantization) can produce deviating prediction outputs on the same inputs. Such behavior deviations in compressed models are undesirable, given that compressed models may be used in reliability-critical scenarios such as automated manufacturing and robotics systems. Inspired by software engineering practices, this paper proposes CompD, a differential testing (DT)-based framework for detecting and repairing prediction deviations between compressed models and their plaintext versions. CompD treats the original/compressed models as “black-box,” thus offering an efficient method that is orthogonal to the specific compression scheme. Furthermore, CompD can leverage deviation-triggering inputs to finetune the compressed models, largely “repairing” their defects. Evaluations show that CompD can effectively test and repair common models compressed by different schemes.}
}
Endnote
%0 Conference Paper
%T Detecting and Repairing Deviated Outputs of Compressed Models
%A Yichen Li
%A Qi Pang
%A Dongwei Xiao
%A Zhibo Liu
%A Shuai Wang
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-li24b
%I PMLR
%P 707--722
%U https://proceedings.mlr.press/v222/li24b.html
%V 222
%X With the rapid development of deep learning and its pervasive use on low-power, resource-constrained devices, model compression methods are increasingly used to reduce model size and computation cost. Despite the overall high test accuracy of compressed models, we observe that an original model and its compressed version (e.g., via quantization) can produce deviating prediction outputs on the same inputs. Such behavior deviations in compressed models are undesirable, given that compressed models may be used in reliability-critical scenarios such as automated manufacturing and robotics systems. Inspired by software engineering practices, this paper proposes CompD, a differential testing (DT)-based framework for detecting and repairing prediction deviations between compressed models and their plaintext versions. CompD treats the original/compressed models as “black-box,” thus offering an efficient method that is orthogonal to the specific compression scheme. Furthermore, CompD can leverage deviation-triggering inputs to finetune the compressed models, largely “repairing” their defects. Evaluations show that CompD can effectively test and repair common models compressed by different schemes.
APA
Li, Y., Pang, Q., Xiao, D., Liu, Z. & Wang, S. (2024). Detecting and Repairing Deviated Outputs of Compressed Models. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:707-722. Available from https://proceedings.mlr.press/v222/li24b.html.
