
Label-Assisted Memory Autoencoder for Unsupervised Out-of-Distribution Detection

  • Conference paper
  • First Online:
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12977)

Abstract

Out-of-Distribution (OoD) detectors based on AutoEncoders (AE) rely on the underlying assumption that an AE network trained only on in-distribution (ID) data cannot reconstruct OoD data as well as ID data. However, this assumption may be violated in practice, degrading detection performance. Alleviating the factors that violate this assumption can therefore improve the robustness of OoD detection. Our empirical studies also show that image complexity can be another factor hindering detection performance for AE-based detectors. To address these issues, we propose two OoD detectors, LAMAE and LAMAE+. Both can be trained without any OoD-related data. The key idea is to regularize the AE network architecture with a classifier and a label-assisted memory to confine the reconstruction of OoD data while retaining the reconstruction ability for ID data. We also adjust the reconstruction error by taking image complexity into consideration. Experimental studies show that the proposed OoD detectors perform well on a wider range of OoD scenarios.
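To make the reconstruction-error idea from the abstract concrete, below is a minimal sketch of complexity-adjusted OoD scoring with a plain autoencoder, assuming PyTorch. The `SmallAE` architecture, the histogram-entropy estimate of image complexity, and the `ood_score` normalisation are illustrative assumptions only; they are not the authors' LAMAE/LAMAE+ formulation, which additionally uses a classifier and a label-assisted memory to confine OoD reconstruction.

```python
# Hypothetical sketch: reconstruction-error OoD scoring with a complexity
# adjustment. Architecture and entropy normalisation are assumptions for
# illustration, not the paper's exact method.
import torch
import torch.nn as nn

class SmallAE(nn.Module):
    """A minimal convolutional autoencoder (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def image_entropy(x, bins=256):
    """Shannon entropy of the gray-level histogram, one value per image."""
    ent = []
    for img in x:                       # x: (N, C, H, W), values in [0, 1]
        hist = torch.histc(img, bins=bins, min=0.0, max=1.0)
        p = hist / hist.sum()
        p = p[p > 0]
        ent.append(-(p * torch.log2(p)).sum())
    return torch.stack(ent)

def ood_score(model, x, eps=1e-6):
    """Higher score = more likely OoD (complexity-adjusted reconstruction error)."""
    with torch.no_grad():
        recon = model(x)
        err = ((recon - x) ** 2).flatten(1).mean(dim=1)   # per-image MSE
        return err / (image_entropy(x) + eps)             # normalise by complexity

model = SmallAE()                     # assume trained on ID data only
scores = ood_score(model, torch.rand(8, 1, 28, 28))
```

In this sketch a higher score indicates a more likely OoD input; the entropy term is one plausible proxy for image complexity, included only to illustrate why an unadjusted reconstruction error can under-penalise visually simple OoD images.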



Acknowledgement

This work was supported by the National Natural Science Foundation of China (Grant No. 62002148), the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), the Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531), the Shenzhen Fundamental Research Program (Grant No. JCYJ20190809121403553), the Research Institute of Trustworthy Autonomous Systems, and Huawei.

Author information


Corresponding author

Correspondence to Xin Yao.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, S., et al. (2021). Label-Assisted Memory Autoencoder for Unsupervised Out-of-Distribution Detection. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science, vol 12977. Springer, Cham. https://doi.org/10.1007/978-3-030-86523-8_48


  • DOI: https://doi.org/10.1007/978-3-030-86523-8_48

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86522-1

  • Online ISBN: 978-3-030-86523-8

  • eBook Packages: Computer Science, Computer Science (R0)
