Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:43068-43104, 2023.

Abstract

Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip models with the ability of OOD detection. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we generally discover the existence of an intermediate stage of a model trained on in-distribution (ID) data having higher OOD detection performance than that of its final stage across different settings, and further identify one critical data-level attribution: learning with atypical samples. Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capability of a well-trained model using only ID data. Our method utilizes a mask to figure out the memorized atypical samples, and then finetunes the model or prunes it with the introduced mask to forget them. Extensive experiments and analysis demonstrate the effectiveness of our method. The code is available at: https://github.com/tmlr-group/Unleashing-Mask.
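The general idea described in the abstract — flag memorized atypical ID samples, then finetune on the remaining typical ones — can be illustrated with a minimal sketch. This is not the paper's actual algorithm (see the linked repository for that); the per-sample-loss criterion and the mean-plus-margin threshold below are illustrative assumptions.

```python
from statistics import mean, pstdev

def atypical_sample_mask(losses, margin=1.0):
    """Hypothetical sketch of the masking step: treat ID training samples
    whose per-sample loss exceeds mean + margin * std as atypical
    (likely memorized), and mask them out. The model would then be
    finetuned (or pruned) using only the samples marked True here.
    The threshold rule is an assumption, not the paper's exact criterion."""
    mu, sigma = mean(losses), pstdev(losses)
    threshold = mu + margin * sigma
    keep = [loss <= threshold for loss in losses]
    return keep, threshold

# Toy per-sample training losses: most are typical, two are atypical outliers.
losses = [0.05, 0.08, 0.04, 0.07, 0.06, 2.5, 3.1]
keep, thr = atypical_sample_mask(losses, margin=1.0)
print(f"keep {sum(keep)} of {len(losses)} samples for finetuning")
```

In this toy example the two high-loss samples fall above the threshold and would be excluded, mimicking the "forget the memorized atypical samples" step at the data level.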

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-zhu23g,
  title     = {Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability},
  author    = {Zhu, Jianing and Li, Hengzhuang and Yao, Jiangchao and Liu, Tongliang and Xu, Jianliang and Han, Bo},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {43068--43104},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/zhu23g/zhu23g.pdf},
  url       = {https://proceedings.mlr.press/v202/zhu23g.html},
  abstract  = {Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip the models with the ability of OOD detection. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we generally discover the existence of an intermediate stage of a model trained on in-distribution (ID) data having higher OOD detection performance than that of its final stage across different settings, and further identify one critical data-level attribution to be learning with the atypical samples. Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data. Our method utilizes a mask to figure out the memorized atypical samples, and then finetune the model or prune it with the introduced mask to forget them. Extensive experiments and analysis demonstrate the effectiveness of our method. The code is available at: https://github.com/tmlr-group/Unleashing-Mask.}
}
Endnote
%0 Conference Paper
%T Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability
%A Jianing Zhu
%A Hengzhuang Li
%A Jiangchao Yao
%A Tongliang Liu
%A Jianliang Xu
%A Bo Han
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-zhu23g
%I PMLR
%P 43068--43104
%U https://proceedings.mlr.press/v202/zhu23g.html
%V 202
%X Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip the models with the ability of OOD detection. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we generally discover the existence of an intermediate stage of a model trained on in-distribution (ID) data having higher OOD detection performance than that of its final stage across different settings, and further identify one critical data-level attribution to be learning with the atypical samples. Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data. Our method utilizes a mask to figure out the memorized atypical samples, and then finetune the model or prune it with the introduced mask to forget them. Extensive experiments and analysis demonstrate the effectiveness of our method. The code is available at: https://github.com/tmlr-group/Unleashing-Mask.
APA
Zhu, J., Li, H., Yao, J., Liu, T., Xu, J. & Han, B. (2023). Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:43068-43104. Available from https://proceedings.mlr.press/v202/zhu23g.html.