
A Low-Cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks

  • Conference paper
  • Computer Safety, Reliability, and Security (SAFECOMP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14181)

Abstract

We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults. Building on the insight that critical faults typically manifest as peak or bulk shifts in the activation distribution of the affected network layers, we use strategically placed quantile markers to make accurate estimates about the anomaly of the current inference as a whole. Importantly, the detector component itself is kept algorithmically transparent to render the categorization of regular and abnormal behavior interpretable to a human. Our technique achieves up to ~96% precision and ~98% recall of detection. Compared to state-of-the-art anomaly detection techniques, this approach requires minimal compute overhead (as little as 0.3% with respect to non-supervised inference time) and contributes to the explainability of the model.
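
As a rough illustration of the quantile-marker idea sketched in the abstract, the following is a minimal, hypothetical implementation: reference quantiles of a few monitored layers' activations are recorded on fault-free calibration data, and an inference is flagged when any marker shifts strongly (a peak or bulk shift) relative to calibration. The function names, marker positions, and the simple z-score decision rule are our own simplifications; the paper's actual detector is a transparent learned classifier over such markers, not this exact rule.

```python
import numpy as np

# Illustrative quantile-marker positions, including the distribution peak (max).
MARKERS = [0.10, 0.25, 0.50, 0.75, 0.90, 1.00]

def calibrate(activations):
    """Compute per-layer reference statistics from fault-free runs.

    activations: list of 2D arrays, one (n_samples, n_units) array per
    monitored layer (e.g. flattened feature maps from two hidden layers).
    Returns per-layer (mean, std) of each quantile marker across samples.
    """
    refs = []
    for layer_acts in activations:
        q = np.quantile(layer_acts, MARKERS, axis=1)      # (n_markers, n_samples)
        refs.append((q.mean(axis=1), q.std(axis=1) + 1e-9))
    return refs

def is_anomalous(layer_outputs, refs, z_thresh=6.0):
    """Flag the current inference if any quantile marker of any monitored
    layer deviates strongly from its calibration statistics."""
    for acts, (mu, sigma) in zip(layer_outputs, refs):
        q = np.quantile(acts, MARKERS)                    # markers for this inference
        if np.any(np.abs((q - mu) / sigma) > z_thresh):   # peak or bulk shift
            return True
    return False
```

A single corrupted activation value (e.g. from a memory bit flip in a high-order exponent bit) dominates the upper quantile markers, so even monitoring two layers with a handful of markers suffices to catch such faults cheaply, while the per-marker z-scores remain human-interpretable.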



Acknowledgement

We thank Neslihan Kose Cihangir and Yang Peng for helpful discussions. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956123. This work was partially funded by the Federal Ministry for Economic Affairs and Climate Action of Germany, as part of the research project SafeWahr (Grant Number: 19A21026C), and the Natural Sciences and Engineering Research Council of Canada (NSERC).

Author information

Correspondence to Florian Geissler.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Geissler, F., Qutub, S., Paulitsch, M., Pattabiraman, K. (2023). A Low-Cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks. In: Guiochet, J., Tonetta, S., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2023. Lecture Notes in Computer Science, vol 14181. Springer, Cham. https://doi.org/10.1007/978-3-031-40923-3_7

  • DOI: https://doi.org/10.1007/978-3-031-40923-3_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40922-6

  • Online ISBN: 978-3-031-40923-3

  • eBook Packages: Computer Science, Computer Science (R0)
