Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection

Chieh-Hsin Lai, Dongmian Zou, Gilad Lerman
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:3538-3567, 2023.

Abstract

We propose a new method for novelty detection that can tolerate high corruption of the training points, whereas previous works assumed either no or very low corruption. Our method trains a robust variational autoencoder (VAE), which aims to generate a model for the uncorrupted training points. To gain robustness to high corruption, we incorporate the following four changes to the common VAE: 1. Extracting crucial features of the latent code by a carefully designed dimension reduction component for distributions; 2. Modeling the latent distribution as a mixture of Gaussian low-rank inliers and full-rank outliers, where the testing only uses the inlier model; 3. Applying the Wasserstein-1 metric for regularization, instead of the Kullback-Leibler (KL) divergence; and 4. Using a robust error for reconstruction. We establish both robustness to outliers and suitability to low-rank modeling of the Wasserstein metric as opposed to the KL divergence. We illustrate state-of-the-art results on standard benchmarks.
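In schematic terms (using our own placeholder notation rather than the paper's), the four modifications listed in the abstract suggest a training objective of the following form, combining a robust reconstruction term with a Wasserstein-1 penalty that ties the encoded latent distribution to an inlier-outlier mixture prior; this is only a hedged sketch of the loss structure, not the authors' exact formulation:

\[
\mathcal{L}(\theta,\phi) \;=\; \frac{1}{N}\sum_{i=1}^{N} \rho\bigl(x_i - \hat{x}_i\bigr) \;+\; \lambda\, W_1\!\Bigl(q_\phi,\; \pi\, p_{\mathrm{in}} + (1-\pi)\, p_{\mathrm{out}}\Bigr)
\]

Here \(\rho\) denotes a robust reconstruction error, \(q_\phi\) the encoded latent distribution (after the dimension-reduction component for distributions), \(p_{\mathrm{in}}\) a low-rank Gaussian inlier component, \(p_{\mathrm{out}}\) a full-rank outlier component with mixture weight \(\pi\), and \(\lambda\) a regularization weight. Per the abstract, only the inlier model \(p_{\mathrm{in}}\) is used at test time to score novelty.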

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-lai23a,
  title     = {Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection},
  author    = {Lai, Chieh-Hsin and Zou, Dongmian and Lerman, Gilad},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {3538--3567},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/lai23a/lai23a.pdf},
  url       = {https://proceedings.mlr.press/v206/lai23a.html},
  abstract  = {We propose a new method for novelty detection that can tolerate high corruption of the training points, whereas previous works assumed either no or very low corruption. Our method trains a robust variational autoencoder (VAE), which aims to generate a model for the uncorrupted training points. To gain robustness to high corruption, we incorporate the following four changes to the common VAE: 1. Extracting crucial features of the latent code by a carefully designed dimension reduction component for distributions; 2. Modeling the latent distribution as a mixture of Gaussian low-rank inliers and full-rank outliers, where the testing only uses the inlier model; 3. Applying the Wasserstein-1 metric for regularization, instead of the Kullback-Leibler (KL) divergence; and 4. Using a robust error for reconstruction. We establish both robustness to outliers and suitability to low-rank modeling of the Wasserstein metric as opposed to the KL divergence. We illustrate state-of-the-art results on standard benchmarks.}
}
Endnote
%0 Conference Paper
%T Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection
%A Chieh-Hsin Lai
%A Dongmian Zou
%A Gilad Lerman
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-lai23a
%I PMLR
%P 3538--3567
%U https://proceedings.mlr.press/v206/lai23a.html
%V 206
%X We propose a new method for novelty detection that can tolerate high corruption of the training points, whereas previous works assumed either no or very low corruption. Our method trains a robust variational autoencoder (VAE), which aims to generate a model for the uncorrupted training points. To gain robustness to high corruption, we incorporate the following four changes to the common VAE: 1. Extracting crucial features of the latent code by a carefully designed dimension reduction component for distributions; 2. Modeling the latent distribution as a mixture of Gaussian low-rank inliers and full-rank outliers, where the testing only uses the inlier model; 3. Applying the Wasserstein-1 metric for regularization, instead of the Kullback-Leibler (KL) divergence; and 4. Using a robust error for reconstruction. We establish both robustness to outliers and suitability to low-rank modeling of the Wasserstein metric as opposed to the KL divergence. We illustrate state-of-the-art results on standard benchmarks.
APA
Lai, C., Zou, D. & Lerman, G. (2023). Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:3538-3567. Available from https://proceedings.mlr.press/v206/lai23a.html.