Detecting Anomalies with \({{\textrm{Latent}}Out}\): Novel Scores, Architectures, and Settings

  • Conference paper
Foundations of Intelligent Systems (ISMIS 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13515)

Abstract

\({{\textrm{Latent}}Out}\) is a recently introduced algorithm for unsupervised anomaly detection that enhances latent-space-based neural methods, namely (Variational) Autoencoders and the GANomaly and AnoGAN architectures. Its main idea is to exploit both the latent space and the baseline score of these architectures in order to provide a refined anomaly score, obtained by performing density estimation in the augmented latent-space/baseline-score feature space. In this paper we extend the research on the \({{\textrm{Latent}}Out}\) methodology in three directions: first, we introduce a novel score that performs a different kind of density estimation at a reduced computational cost; second, we experiment with the combination of \({{\textrm{Latent}}Out}\) and GAAL architectures, a novel type of Generative Adversarial Network for unsupervised anomaly detection; third, we investigate the performance of \({{\textrm{Latent}}Out}\) when acting as a one-class classifier. The experiments show that all the variants of \({{\textrm{Latent}}Out}\) introduced here improve the performance of the baseline methods to which they are applied, in both the unsupervised and the semi-supervised settings.
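
The abstract describes the core mechanism only at a high level: the latent representation and the baseline anomaly score of the underlying architecture are combined into one augmented feature space, and density estimation in that space yields the refined score. The sketch below illustrates this idea under explicit assumptions; the function name `refined_anomaly_score`, the choice of k, the per-feature standardization, and the use of the average k-nearest-neighbour distance as a density proxy are illustrative choices, not the exact score defined by the authors.

```python
# Minimal sketch of the augmented-space density-estimation idea described in
# the abstract. All names and parameter choices below are assumptions made
# for illustration; they do not reproduce the exact LatentOut score.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def refined_anomaly_score(latent: np.ndarray, baseline_score: np.ndarray, k: int = 20) -> np.ndarray:
    """Score points by density in the augmented latent-space/baseline-score space.

    latent: (n, d) latent codes produced by the underlying architecture.
    baseline_score: (n,) baseline anomaly scores (e.g. reconstruction errors).
    Returns an (n,) array where larger values indicate more anomalous points.
    """
    # Stack the latent coordinates and the baseline score into one feature
    # space, then standardize so both contribute on comparable scales
    # (an assumption made here, not taken from the paper).
    augmented = np.column_stack([latent, baseline_score])
    augmented = (augmented - augmented.mean(axis=0)) / (augmented.std(axis=0) + 1e-12)

    # Use the average distance to the k nearest neighbours as a simple inverse
    # density estimate; k + 1 because each point is its own nearest neighbour.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(augmented)
    distances, _ = nn.kneighbors(augmented)
    return distances[:, 1:].mean(axis=1)


if __name__ == "__main__":
    # Synthetic stand-ins for an autoencoder's latent codes and reconstruction errors.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(1000, 32))
    baseline = rng.gamma(2.0, 1.0, size=1000)
    print(refined_anomaly_score(latent, baseline)[:5])
```

In such a sketch, points that score high are those that are both poorly handled by the baseline score and isolated in the latent representation, which reflects the abstract's stated intuition of exploiting the two signals jointly.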

Notes

  1. http://yann.lecun.com/exdb/mnist/.

  2. https://github.com/zalandoresearch/fashion-mnist.

  3. https://www.cs.toronto.edu/~kriz/cifar.html.

Author information

Corresponding author

Correspondence to Luca Ferragina.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Angiulli, F., Fassetti, F., Ferragina, L. (2022). Detecting Anomalies with \({{\textrm{Latent}}Out}\): Novel Scores, Architectures, and Settings. In: Ceci, M., Flesca, S., Masciari, E., Manco, G., Raś, Z.W. (eds) Foundations of Intelligent Systems. ISMIS 2022. Lecture Notes in Computer Science, vol. 13515. Springer, Cham. https://doi.org/10.1007/978-3-031-16564-1_24

  • DOI: https://doi.org/10.1007/978-3-031-16564-1_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16563-4

  • Online ISBN: 978-3-031-16564-1

  • eBook Packages: Computer Science, Computer Science (R0)
