Abstract
The use of deep neural network accelerators in safety-critical systems, such as autonomous vehicles, requires measures to ensure the functional safety of the embedded hardware. However, because of the vast computational requirements of deep neural networks, traditional redundancy-based approaches to detecting and mitigating random hardware errors lead to very inefficient systems. In this paper we present an efficient and effective method to detect critical bit-flip errors in neural network accelerators and mitigate their effects at run time. Our method is based on anomaly detection in the intermediate outputs of the neural network. We evaluate it through fault injection simulations with two deep neural networks and two data sets. In these experiments our error detector achieves a recall of up to 99.03% and a precision of up to 97.29%, while requiring a computational overhead of only 2.67% or less.
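The abstract only summarizes the detection scheme; the sketch below shows one plausible reading of it: per-layer bounds on an activation statistic are learned from fault-free runs, and an inference is flagged when any layer's statistic leaves its interval. The statistic (mean absolute activation), the margin heuristic, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def calibrate_bounds(calibration_runs, margin=0.1):
    """Learn a per-layer [lo, hi] interval for an activation statistic
    from fault-free calibration runs.

    calibration_runs: list over layers; each element is a list of
    activation arrays, one per fault-free calibration input.
    """
    bounds = []
    for layer_acts in calibration_runs:
        stats = np.array([np.abs(a).mean() for a in layer_acts])
        lo, hi = stats.min(), stats.max()
        span = hi - lo
        # Widen the observed range slightly to reduce false positives.
        bounds.append((lo - margin * span, hi + margin * span))
    return bounds

def is_anomalous(layer_outputs, bounds):
    """Flag an inference as suspect if any layer's statistic leaves its
    calibrated interval; a bit flip in a high-order bit typically
    produces a large activation outlier."""
    for acts, (lo, hi) in zip(layer_outputs, bounds):
        stat = np.abs(np.asarray(acts)).mean()
        if not lo <= stat <= hi:
            return True
    return False
```

On detection, the faulty inference could be re-executed or the system transferred to a safe state. Since the check needs only one reduction per layer, its cost stays small next to the convolutions themselves, which is consistent with the computational overhead of 2.67% or less reported above.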
Notes
1. 1 failure-in-time (FIT) = one failure in one billion (\(10^{9}\)) device operating hours (see the worked conversion after these notes).
2. A basic introduction to neural network training concepts can be found, e.g., in [3].
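As a worked example of the FIT unit from note 1 (an illustrative calculation, not taken from the paper): a component with a failure rate of \(\lambda = 100\) FIT has a mean time between failures of

\[
\text{MTBF} = \frac{1}{\lambda} = \frac{10^{9}\ \text{h}}{100\ \text{failures}} = 10^{7}\ \text{h} \approx 1{,}140\ \text{years}
\]

of continuous operation per device.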
References
Aitken, R., Fey, G., Kalbarczyk, Z.T., Reichenbach, F., Sonza Reorda, M.: Reliability analysis reloaded: how will we survive? In: Design, Automation and Test in Europe Conference and Exhibition (DATE), pp. 358–367 (2013)
Bernieri, A., Betta, G., Liguori, C.: On-line fault detection and diagnosis obtained by implementing neural algorithms on a digital signal processor. IEEE Trans. Instrum. Meas. 45(5), 894–899 (1996)
Bishop, C.M.: Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, New York (2006)
Burton, S., Gauerhof, L., Heinzemann, C.: Making the case for safety of machine learning in highly automated driving. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 5–16. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_1
Canziani, A., Paszke, A., Culurciello, E.: An analysis of deep neural network models for practical applications. CoRR abs/1605.07678 (2016)
Chen, Y.H., Emer, J., Sze, V.: Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks. In: Proceedings of the 43rd International Symposium on Computer Architecture, ISCA 2016, pp. 367–379 (2016)
Chen, Y.H., Emer, J., Sze, V.: Using dataflow to optimize energy efficiency of deep neural network accelerators. IEEE Micro 37(3), 12–21 (2017)
Du, Z., et al.: ShiDianNao: shifting vision processing closer to the sensor. In: Proceedings of the 42nd Annual International Symposium on Computer Architecture, ISCA 2015, pp. 92–104 (2015)
Gu, J., et al.: Recent advances in convolutional neural networks. Pattern Recogn. 77, 354–377 (2018)
Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P.: Deep learning with limited numerical precision. In: Proceedings of the 32nd International Conference on Machine Learning, pp. 1737–1746 (2015)
Gysel, P.: Ristretto: hardware-oriented approximation of convolutional neural networks. Master's thesis, University of California, Davis (2016)
Hashemi, S., Anthony, N., Tann, H., Bahar, R.I., Reda, S.: Understanding the impact of precision quantization on the accuracy and energy of neural networks. In: Design, Automation and Test in Europe Conference and Exhibition (DATE), pp. 1474–1479 (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: IEEE International Conference on Computer Vision (ICCV), pp. 1026–1034 (2015)
Henkel, J., et al.: Reliable on-chip systems in the nano-era. In: 50th Annual Design Automation Conference, pp. 695–704 (2013)
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Binarized neural networks. In: Advances in Neural Information Processing Systems (2016)
Ibe, E., Taniguchi, H., Yahagi, Y., Shimbo, K.I., Toba, T.: Impact of scaling on neutron-induced soft error in SRAMs from a 250 nm to a 22 nm design rule. IEEE Trans. Electron Devices 57(7), 1527–1538 (2010)
ISO 26262: Road vehicles - Functional safety. International Organization for Standardization (2011)
Krizhevsky, A.: Learning multiple layers of features from tiny images. Master's thesis, University of Toronto (2009)
Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012)
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
LeCun, Y., et al.: Handwritten digit recognition with a back-propagation network. In: Advances in Neural Information Processing Systems, pp. 396–404 (1990)
Li, G., et al.: Understanding error propagation in deep learning neural network (DNN) accelerators and applications. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (2017)
Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations. In: International Conference on Learning Representations (ICLR) (2017)
Mnih, V., et al.: Playing Atari with deep reinforcement learning. CoRR abs/1312.5602 (2013)
Peemen, M., Setio, A.A.A., Mesman, B., Corporaal, H.: Memory-centric accelerator design for convolutional neural networks. In: IEEE 31st International Conference on Computer Design (ICCD), pp. 13–19 (2013)
Schorn, C., Guntoro, A., Ascheid, G.: Accurate neuron resilience prediction for a flexible reliability management in neural network accelerators. In: Design, Automation and Test in Europe Conference and Exhibition (DATE) (2018)
Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)
Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: the all convolutional net. CoRR abs/1412.6806 (2014)
Stallkamp, J., Schlipsing, M., Salmen, J., Igel, C.: Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Netw. 32, 323–332 (2012)
Szegedy, C., et al.: Intriguing properties of neural networks. In: International Conference on Learning Representations (ICLR) (2014)
Vogel, S., Schorn, C., Guntoro, A., Ascheid, G.: Efficient stochastic inference of bitwise deep neural networks. CoRR abs/1611.06539 (2016)
Zeiler, M.D., et al.: On rectified linear units for speech processing. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3517–3521 (2013)
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Schorn, C., Guntoro, A., Ascheid, G. (2018). Efficient On-Line Error Detection and Mitigation for Deep Neural Network Accelerators. In: Gallina, B., Skavhaug, A., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2018. Lecture Notes in Computer Science, vol. 11093. Springer, Cham. https://doi.org/10.1007/978-3-319-99130-6_14
Print ISBN: 978-3-319-99129-0
Online ISBN: 978-3-319-99130-6