Anomaly Detection With Multiple-Hypotheses Predictions

Duc Tam Nguyen, Zhongyu Lou, Michael Klar, Thomas Brox
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4800-4809, 2019.

Abstract

In one-class-learning tasks, only the normal case (foreground) can be modeled with data, whereas the variation of all possible anomalies is too erratic to be described by samples. Thus, due to the lack of representative data, the widespread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the foreground, are used. However, generative models suffer from a large input dimensionality (as in images) and are typically inefficient learners. We propose to learn the data distribution of the foreground more efficiently with a multi-hypotheses autoencoder. Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses. Our multiple-hypotheses-based anomaly detection framework allows the reliable identification of out-of-distribution samples. For anomaly detection on CIFAR-10, it yields up to a 3.9 percentage-point improvement over previously reported results. On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.
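The core scoring idea described in the abstract can be illustrated with a minimal sketch: each of several hypotheses proposes a reconstruction of the input, and the anomaly score is the best (minimum) reconstruction error across hypotheses. This is not the authors' code; the toy "decoders" here simply snap inputs to fixed data modes, standing in for the heads of a trained multi-hypotheses autoencoder.

```python
# Hedged sketch of multiple-hypotheses anomaly scoring (not the paper's code).
# Each hypothesis is a toy "decoder"; the anomaly score of an input is the
# minimum mean-squared reconstruction error over all hypotheses.
import numpy as np

def multi_hypothesis_score(x, decoders):
    """Anomaly score = best (minimum) reconstruction error across hypotheses."""
    errors = [np.mean((x - d(x)) ** 2) for d in decoders]
    return min(errors)

# Toy hypotheses: each decoder maps any input to one fixed data mode.
modes = [np.zeros(4), np.ones(4)]
decoders = [(lambda m: (lambda x: m))(m) for m in modes]

normal = np.ones(4)        # close to a learned mode -> low score
anomaly = np.full(4, 5.0)  # far from every mode -> high score

print(multi_hypothesis_score(normal, decoders))   # 0.0
print(multi_hypothesis_score(anomaly, decoders))  # 16.0
```

Thresholding this score separates in-distribution from out-of-distribution inputs; the paper's discriminator additionally penalizes hypotheses that cover regions unsupported by data.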

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-nguyen19b,
  title     = {Anomaly Detection With Multiple-Hypotheses Predictions},
  author    = {Nguyen, Duc Tam and Lou, Zhongyu and Klar, Michael and Brox, Thomas},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4800--4809},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/nguyen19b/nguyen19b.pdf},
  url       = {https://proceedings.mlr.press/v97/nguyen19b.html},
  abstract  = {In one-class-learning tasks, only the normal case (foreground) can be modeled with data, whereas the variation of all possible anomalies is too erratic to be described by samples. Thus, due to the lack of representative data, the widespread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the foreground, are used. However, generative models suffer from a large input dimensionality (as in images) and are typically inefficient learners. We propose to learn the data distribution of the foreground more efficiently with a multi-hypotheses autoencoder. Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses. Our multiple-hypotheses-based anomaly detection framework allows the reliable identification of out-of-distribution samples. For anomaly detection on CIFAR-10, it yields up to a 3.9 percentage-point improvement over previously reported results. On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.}
}
Endnote
%0 Conference Paper
%T Anomaly Detection With Multiple-Hypotheses Predictions
%A Duc Tam Nguyen
%A Zhongyu Lou
%A Michael Klar
%A Thomas Brox
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-nguyen19b
%I PMLR
%P 4800--4809
%U https://proceedings.mlr.press/v97/nguyen19b.html
%V 97
%X In one-class-learning tasks, only the normal case (foreground) can be modeled with data, whereas the variation of all possible anomalies is too erratic to be described by samples. Thus, due to the lack of representative data, the widespread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the foreground, are used. However, generative models suffer from a large input dimensionality (as in images) and are typically inefficient learners. We propose to learn the data distribution of the foreground more efficiently with a multi-hypotheses autoencoder. Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses. Our multiple-hypotheses-based anomaly detection framework allows the reliable identification of out-of-distribution samples. For anomaly detection on CIFAR-10, it yields up to a 3.9 percentage-point improvement over previously reported results. On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%.
APA
Nguyen, D.T., Lou, Z., Klar, M. & Brox, T. (2019). Anomaly Detection With Multiple-Hypotheses Predictions. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4800-4809. Available from https://proceedings.mlr.press/v97/nguyen19b.html.
