Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments

Tianchen Ji, Sri Theja Vuppala, Girish Chowdhary, Katherine Driggs-Campbell
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1443-1455, 2021.

Abstract

To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals could provide more information for such anomaly detection tasks; however, the fusion of high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative and discriminative models, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance over baseline methods and show that our model learns interpretable representations.
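The abstract describes a one-stage objective that combines a VAE's generative terms (reconstruction and KL divergence) with a discriminative classification loss on the latent code. A minimal numpy sketch of such a combined objective is below; the dimensions, linear "networks", and loss weighting are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: multi-modal sensor features concatenated into one
# input vector, a small latent space, and a few failure classes.
x_dim, z_dim, n_classes = 12, 2, 4

# Toy linear "networks": encoder -> (mu, log_var), decoder, classifier.
W_enc_mu = rng.normal(scale=0.1, size=(x_dim, z_dim))
W_enc_lv = rng.normal(scale=0.1, size=(x_dim, z_dim))
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_cls = rng.normal(scale=0.1, size=(z_dim, n_classes))

def svae_loss(x, y):
    """One-stage objective: VAE ELBO terms plus a supervised
    cross-entropy on the latent code (a sketch, not the paper's model)."""
    mu = x @ W_enc_mu
    log_var = x @ W_enc_lv
    # Reparameterization trick: z = mu + sigma * eps
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Generative term 1: reconstruction error (Gaussian likelihood).
    x_hat = z @ W_dec
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # Generative term 2: KL( q(z|x) = N(mu, sigma^2) || N(0, I) ).
    kl = np.mean(-0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
    # Discriminative term: softmax cross-entropy on the latent code.
    logits = z @ W_cls
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(y)), y])
    # Single objective optimized in one stage.
    return recon + kl + ce

x = rng.normal(size=(8, x_dim))          # batch of sensor features
y = rng.integers(0, n_classes, size=8)   # failure-class labels
loss = svae_loss(x, y)
```

Because the generative and discriminative terms share one scalar loss, a single optimizer pass trains the feature extractor and the classifier jointly, which is the one-stage property the abstract highlights.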

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-ji21a,
  title     = {Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments},
  author    = {Ji, Tianchen and Vuppala, Sri Theja and Chowdhary, Girish and Driggs-Campbell, Katherine},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1443--1455},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/ji21a/ji21a.pdf},
  url       = {https://proceedings.mlr.press/v155/ji21a.html},
  abstract  = {To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals could provide more information for such anomaly detection tasks; however, the fusion of high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative and discriminative models, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance over baseline methods and show that our model learns interpretable representations.}
}
Endnote
%0 Conference Paper
%T Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments
%A Tianchen Ji
%A Sri Theja Vuppala
%A Girish Chowdhary
%A Katherine Driggs-Campbell
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-ji21a
%I PMLR
%P 1443--1455
%U https://proceedings.mlr.press/v155/ji21a.html
%V 155
%X To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals could provide more information for such anomaly detection tasks; however, the fusion of high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative and discriminative models, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance over baseline methods and show that our model learns interpretable representations.
APA
Ji, T., Vuppala, S.T., Chowdhary, G. & Driggs-Campbell, K. (2021). Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1443-1455. Available from https://proceedings.mlr.press/v155/ji21a.html.