Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification

Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen
Proceedings of The 2nd Conference on Lifelong Learning Agents, PMLR 232:162-183, 2023.

Abstract

Machine learning methods must be trusted to make appropriate decisions in real-world environments, even when faced with out-of-distribution (OOD) samples. Many current approaches simply aim to detect OOD examples and alert the user when an unrecognized input is given. However, when the OOD sample significantly overlaps with the training data, binary anomaly detection is neither interpretable nor explainable and provides little information to the user. We propose a new model for OOD detection that makes predictions at varying levels of granularity: as the inputs become more ambiguous, the model predictions become coarser and more conservative. Consider an animal classifier that encounters an unknown bird species and a car. Both cases are OOD, but the user gains more information if the classifier recognizes that its uncertainty over the particular species is too large and predicts “bird” instead of detecting the input as OOD. Furthermore, we diagnose the classifier’s performance at each level of the hierarchy, improving the explainability and interpretability of the model’s predictions. We demonstrate the effectiveness of hierarchical classifiers for both fine- and coarse-grained OOD tasks.
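
The back-off behavior described in the abstract can be sketched in a few lines of Python. The snippet below is an illustrative sketch only, not the authors' implementation: the function name, the leaf_to_coarse mapping, and the thresholds tau_fine and tau_coarse are hypothetical. It assumes a flat softmax over fine-grained (leaf) classes and commits to the most specific level whose confidence clears a threshold, backing off from a species-level label to a coarse label, and finally to an OOD flag.

# Illustrative sketch of hierarchical back-off inference over a two-level
# label tree (hypothetical names and thresholds, not the paper's algorithm).
import numpy as np

def hierarchical_predict(leaf_probs, leaf_to_coarse, tau_fine=0.9, tau_coarse=0.9):
    """Return the most specific label whose confidence clears its threshold.

    leaf_probs     : softmax probabilities over fine-grained (leaf) classes
    leaf_to_coarse : dict mapping each leaf index to a coarse label (e.g. species -> "bird")
    """
    # Fine-grained decision: commit to a species-level label if confident enough.
    leaf = int(np.argmax(leaf_probs))
    if leaf_probs[leaf] >= tau_fine:
        return ("fine", leaf)

    # Coarse decision: sum leaf probabilities within each coarse class
    # (e.g. all bird species) and check confidence at the parent level.
    coarse_probs = {}
    for idx, p in enumerate(leaf_probs):
        c = leaf_to_coarse[idx]
        coarse_probs[c] = coarse_probs.get(c, 0.0) + float(p)
    coarse, p_coarse = max(coarse_probs.items(), key=lambda kv: kv[1])
    if p_coarse >= tau_coarse:
        return ("coarse", coarse)

    # Neither level is confident: flag the input as out-of-distribution.
    return ("ood", None)

# Example: an unknown bird species spreads probability mass across bird leaves,
# so the fine-grained prediction is rejected but the coarse "bird" label survives.
probs = np.array([0.35, 0.30, 0.25, 0.05, 0.05])          # hypothetical softmax output
mapping = {0: "bird", 1: "bird", 2: "bird", 3: "cat", 4: "dog"}
print(hierarchical_predict(probs, mapping))                # -> ('coarse', 'bird')

In this sketch, a car-like input would spread probability across unrelated coarse classes as well, so neither threshold would be met and the input would be flagged as OOD, matching the behavior the abstract describes.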

Cite this Paper


BibTeX
@InProceedings{pmlr-v232-linderman23a,
  title     = {Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification},
  author    = {Linderman, Randolph and Zhang, Jingyang and Inkawhich, Nathan and Li, Hai and Chen, Yiran},
  booktitle = {Proceedings of The 2nd Conference on Lifelong Learning Agents},
  pages     = {162--183},
  year      = {2023},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Sedghi, Hanie and Precup, Doina},
  volume    = {232},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v232/linderman23a/linderman23a.pdf},
  url       = {https://proceedings.mlr.press/v232/linderman23a.html},
  abstract  = {Machine learning methods must be trusted to make appropriate decisions in real-world environments, even when faced with out-of-distribution (OOD) samples. Many current approaches simply aim to detect OOD examples and alert the user when an unrecognized input is given. However, when the OOD sample significantly overlaps with the training data, a binary anomaly detection is not interpretable or explainable, and provides little information to the user. We propose a new model for OOD detection that makes predictions at varying levels of granularity—as the inputs become more ambiguous, the model predictions become coarser and more conservative. Consider an animal classifier that encounters an unknown bird species and a car. Both cases are OOD, but the user gains more information if the classifier recognizes that its uncertainty over the particular species is too large and predicts “bird” instead of detecting it as OOD. Furthermore, we diagnose the classifier’s performance at each level of the hierarchy improving the explainability and interpretability of the model’s predictions. We demonstrate the effectiveness of hierarchical classifiers for both fine- and coarse-grained OOD tasks.}
}
Endnote
%0 Conference Paper
%T Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification
%A Randolph Linderman
%A Jingyang Zhang
%A Nathan Inkawhich
%A Hai Li
%A Yiran Chen
%B Proceedings of The 2nd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2023
%E Sarath Chandar
%E Razvan Pascanu
%E Hanie Sedghi
%E Doina Precup
%F pmlr-v232-linderman23a
%I PMLR
%P 162--183
%U https://proceedings.mlr.press/v232/linderman23a.html
%V 232
%X Machine learning methods must be trusted to make appropriate decisions in real-world environments, even when faced with out-of-distribution (OOD) samples. Many current approaches simply aim to detect OOD examples and alert the user when an unrecognized input is given. However, when the OOD sample significantly overlaps with the training data, a binary anomaly detection is not interpretable or explainable, and provides little information to the user. We propose a new model for OOD detection that makes predictions at varying levels of granularity—as the inputs become more ambiguous, the model predictions become coarser and more conservative. Consider an animal classifier that encounters an unknown bird species and a car. Both cases are OOD, but the user gains more information if the classifier recognizes that its uncertainty over the particular species is too large and predicts “bird” instead of detecting it as OOD. Furthermore, we diagnose the classifier’s performance at each level of the hierarchy improving the explainability and interpretability of the model’s predictions. We demonstrate the effectiveness of hierarchical classifiers for both fine- and coarse-grained OOD tasks.
APA
Linderman, R., Zhang, J., Inkawhich, N., Li, H., & Chen, Y. (2023). Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification. Proceedings of The 2nd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 232:162-183. Available from https://proceedings.mlr.press/v232/linderman23a.html.