A General Framework For Detecting Anomalous Inputs to DNN Classifiers

Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8764-8775, 2021.

Abstract

Detecting anomalous inputs, such as adversarial and out-of-distribution (OOD) inputs, is critical for classifiers (including deep neural networks or DNNs) deployed in real-world applications. While prior works have proposed various methods to detect such anomalous samples using information from the internal layer representations of a DNN, there is a lack of consensus on a principled approach for the different components of such a detection method. As a result, heuristic, one-off methods are often applied to different aspects of this problem. We propose an unsupervised anomaly detection framework based on the internal DNN layer representations in the form of a meta-algorithm with configurable components. We proceed to propose specific instantiations for each component of the meta-algorithm based on ideas grounded in statistical testing and anomaly detection. We evaluate the proposed methods on well-known image classification datasets with strong adversarial attacks and OOD inputs, including an adaptive attack that uses the internal layer representations of the DNN (often not considered in prior work). Comparisons with five recently proposed competing detection methods demonstrate the effectiveness of our method in detecting adversarial and OOD inputs.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-raghuram21a,
  title     = {A General Framework For Detecting Anomalous Inputs to DNN Classifiers},
  author    = {Raghuram, Jayaram and Chandrasekaran, Varun and Jha, Somesh and Banerjee, Suman},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {8764--8775},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/raghuram21a/raghuram21a.pdf},
  url       = {https://proceedings.mlr.press/v139/raghuram21a.html}
}
Endnote
%0 Conference Paper
%T A General Framework For Detecting Anomalous Inputs to DNN Classifiers
%A Jayaram Raghuram
%A Varun Chandrasekaran
%A Somesh Jha
%A Suman Banerjee
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-raghuram21a
%I PMLR
%P 8764--8775
%U https://proceedings.mlr.press/v139/raghuram21a.html
%V 139
APA
Raghuram, J., Chandrasekaran, V., Jha, S., & Banerjee, S. (2021). A General Framework For Detecting Anomalous Inputs to DNN Classifiers. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:8764-8775. Available from https://proceedings.mlr.press/v139/raghuram21a.html.
