
RMVAE: one-class classification via divergence regularization and maximization mutual information

  • Regular Article

Abstract

One-class classification aims to learn a classifier from data of a single class. The variational auto-encoder (VAE) has been widely used for this task: trained only on normal samples, it reconstructs every test image to resemble the normal class, so abnormal samples yield higher reconstruction errors than normal ones, and the reconstruction error can serve as a classification criterion. However, due to model generalization, the VAE can also reconstruct abnormal samples well and produce low reconstruction errors, which leads to abnormal samples being misclassified as normal. To alleviate this shortcoming of the VAE, we propose to enhance it with a mutual information module and divergence regularization; the new model is called RMVAE. Firstly, drawing on the idea of contrastive learning, we maximize the mutual information between the input image and its latent representation so that the encoder captures the distinctive characteristics of the normal class. Besides, an attention mechanism is used in the encoder to enhance the feature extraction capability of the model. Secondly, we introduce divergence regularization to make the latent representations of the normal samples evenly distributed in the latent space. Extensive experiments on three public benchmark datasets demonstrate that the proposed method outperforms other state-of-the-art methods.
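The reconstruction-error criterion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `reconstruct` function stands in for a trained VAE's encode–decode pass (here a hypothetical damping map so that samples far from the normal region reconstruct poorly), and the threshold value is arbitrary.

```python
import numpy as np

def anomaly_scores(x, reconstruct):
    """Per-sample mean squared reconstruction error."""
    x_hat = reconstruct(x)
    return np.mean((x - x_hat) ** 2, axis=tuple(range(1, x.ndim)))

def classify(x, reconstruct, threshold):
    """Flag samples whose reconstruction error exceeds the threshold as abnormal."""
    return anomaly_scores(x, reconstruct) > threshold

# Toy stand-in for a trained autoencoder: a damping map, so inputs near the
# "normal" region (here, around zero) reconstruct with small error while
# far-away inputs reconstruct poorly.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(5, 8))
abnormal = rng.normal(3.0, 0.1, size=(5, 8))
reconstruct = lambda x: 0.5 * x  # hypothetical reconstruction map

flags = classify(np.vstack([normal, abnormal]), reconstruct, threshold=0.5)
print(flags)  # first five False (normal), last five True (abnormal)
```

The paper's point is precisely that this criterion can fail: a VAE that generalizes too well reconstructs abnormal inputs with small error, so the threshold no longer separates the classes — which motivates the mutual-information and divergence-regularization terms.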



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 62072238.

Corresponding author

Correspondence to LongQuan Dai.

Additional information

Communicated by B-K Bao.


Cite this article

Hong, C., Dai, L. RMVAE: one-class classification via divergence regularization and maximization mutual information. Multimedia Systems 28, 1667–1677 (2022). https://doi.org/10.1007/s00530-022-00932-8
