Abstract
Neural networks are now widely used in industry for applications such as data analysis and pattern recognition. In the medical devices domain, neural networks are used to detect certain medical or disease indications. For example, a potentially imminent asthma insult can be detected based on breathing pattern, heart rate, and a few optional additional parameters. The patient then receives a warning message and can change their behavior and/or take medication to avoid the insult, which directly improves the patient's quality of life. Although medical devices currently use neural networks mostly to provide guidance information or to propose a treatment or a change of settings, the safety and reliability of the neural network are paramount: internal errors or influences from the environment can cause wrong inferences. This paper describes the experiences we made and the measures we took to increase both the safety and the reliability of a neural network in a medical device. We use a combination of online and offline tests to detect undesired behavior: online tests are performed at regular intervals during therapy, while offline tests are performed when the device is not delivering therapy.
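To make the online/offline split concrete, the sketch below shows one plausible scheduling scheme in Python. It is not the paper's implementation; every name (therapy_loop, online_test, offline_test, the reference-vector and checksum checks) and the 60-second interval are assumptions for illustration only.

```python
import time

# A minimal sketch of the online/offline test split described in the abstract.
# All names and parameters here are illustrative assumptions, not taken from
# the paper.

ONLINE_TEST_INTERVAL_S = 60.0  # assumed interval between online tests


def online_test(network) -> bool:
    """Quick check during therapy: run a few reference inputs whose expected
    outputs are known and flag deviations beyond a tolerance, which may
    indicate an internal error (e.g. a corrupted weight) or an external
    influence on the hardware."""
    return all(
        abs(network.infer(x) - y) <= network.tolerance
        for x, y in network.reference_vectors
    )


def offline_test(network) -> bool:
    """Thorough check while no therapy is delivered: verify a checksum over
    the stored weights and exercise the inference path with an extended
    set of test vectors."""
    return network.weights_checksum_ok() and all(
        abs(network.infer(x) - y) <= network.tolerance
        for x, y in network.extended_reference_vectors
    )


def therapy_loop(device, network):
    """Run online tests at regular intervals during therapy and offline
    tests whenever the device is idle."""
    last_online_test = time.monotonic()
    while device.powered_on():
        if device.therapy_active():
            device.step_therapy(network)
            if time.monotonic() - last_online_test >= ONLINE_TEST_INTERVAL_S:
                if not online_test(network):
                    device.enter_safe_state()  # undesired behavior detected
                last_online_test = time.monotonic()
        else:
            if not offline_test(network):
                device.enter_safe_state()
            time.sleep(1.0)  # avoid re-running the offline test back to back
```

The key design point carried over from the abstract is the scheduling constraint: the cheap online test runs periodically without interrupting therapy, while the expensive offline test is deferred to idle periods.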