Abstract
We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory faults and input faults. Building on the insight that critical faults typically manifest as peak or bulk shifts in the activation distributions of the affected network layers, we use strategically placed quantile markers to make accurate estimates about the anomaly of the current inference as a whole. Importantly, the detector component itself is kept algorithmically transparent, so that the categorization of regular and abnormal behavior remains interpretable to a human. Our technique achieves up to ~96% precision and ~98% recall of detection. Compared to state-of-the-art anomaly detection techniques, this approach requires minimal compute overhead (as little as 0.3% with respect to unmonitored inference time) and contributes to the explainability of the model.
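To make the mechanism concrete, the following is a minimal PyTorch sketch of quantile-marker monitoring via forward hooks. It is our own illustration, not the paper's implementation: the names QuantileMonitor and attach_monitors are hypothetical, and the paper's actual layer selection, quantile placement, and decision rule are not reproduced here.

import torch
import torch.nn as nn

class QuantileMonitor:
    """Records selected quantiles of one layer's activations via a forward hook."""
    def __init__(self, quantiles=(0.25, 0.5, 0.75, 1.0)):
        self.q = torch.tensor(quantiles)
        self.values = None  # quantile markers of the most recent inference

    def hook(self, module, inputs, output):
        # Flatten the activation tensor and record its quantile markers.
        self.values = torch.quantile(output.detach().flatten().float(), self.q)

def attach_monitors(model, layer_names):
    # Register a monitor on each selected (strategically placed) layer.
    monitors = {}
    for name, module in model.named_modules():
        if name in layer_names:
            monitor = QuantileMonitor()
            module.register_forward_hook(monitor.hook)
            monitors[name] = monitor
    return monitors

# Toy model; in practice the monitored layers would be two or more hidden
# layers of a trained vision network, as described in the abstract.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3), nn.ReLU())
monitors = attach_monitors(model, {"1", "3"})  # monitor the two ReLU layers
_ = model(torch.randn(1, 3, 32, 32))
for name, monitor in monitors.items():
    print(name, monitor.values)

A detector in this spirit would calibrate per-layer quantile intervals on fault-free data and flag an inference as anomalous when a marker leaves its interval, so that each alarm traces back to a human-readable shift of a specific quantile and the decision stays transparent.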
Acknowledgement
We thank Neslihan Kose Cihangir and Yang Peng for helpful discussions. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956123. This work was partially funded by the Federal Ministry for Economic Affairs and Climate Action of Germany, as part of the research project SafeWahr (Grant Number: 19A21026C), and the Natural Sciences and Engineering Research Council of Canada (NSERC).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Geissler, F., Qutub, S., Paulitsch, M., Pattabiraman, K. (2023). A Low-Cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks. In: Guiochet, J., Tonetta, S., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2023. Lecture Notes in Computer Science, vol 14181. Springer, Cham. https://doi.org/10.1007/978-3-031-40923-3_7
Print ISBN: 978-3-031-40922-6
Online ISBN: 978-3-031-40923-3