
Increasing Safety of Neural Networks in Medical Devices

  • Conference paper

Computer Safety, Reliability, and Security (SAFECOMP 2019)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 11699)

Abstract

Neural networks are now widely used in industry for applications such as data analysis and pattern recognition. In the medical devices domain, neural networks are used to detect certain medical or disease indications. For example, a potentially imminent asthma insult can be detected based on, e.g., breathing pattern, heart rate, and a few optional additional parameters. The patient receives a warning message and can change their behavior and/or take some medicine in order to avoid the insult, which directly increases the patient's quality of life. Although medical devices currently use neural networks mostly to provide guidance information or to propose a treatment or a change of settings, the safety and reliability of the neural network are paramount: internal errors or influences from the environment can cause wrong inferences. This paper describes our experiences and the methods we used to increase both the safety and the reliability of a neural network in a medical device. We use a combination of online and offline tests to detect undesired behavior. Online tests are performed at regular intervals during therapy, and offline tests are performed when the device is not performing therapy.
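The online/offline test scheme described in the abstract can be sketched as below. This is an illustrative reconstruction, not the paper's actual implementation: the class name `MonitoredNetwork`, the particular checks (a checksum over the weight memory plus inference on stored reference vectors), and the tolerances are all assumptions. The idea is that a cheap check runs periodically during therapy, while the full reference suite runs only when no therapy is being delivered.

```python
import hashlib
import struct


class MonitoredNetwork:
    """Wraps a neural-network inference function with self-tests.

    Illustrative sketch only: names, checks, and tolerances are
    assumptions, not the implementation from the paper.
    """

    def __init__(self, weights, infer_fn, ref_inputs, ref_outputs):
        self.weights = weights        # flat list of float weights
        self.infer_fn = infer_fn      # infer_fn(weights, input) -> output
        self.ref_in = ref_inputs      # reference vectors recorded at release time
        self.ref_out = ref_outputs    # expected outputs for those vectors
        self.weight_digest = self._digest()  # "golden" checksum of the weights

    def _digest(self):
        packed = struct.pack(f"{len(self.weights)}d", *self.weights)
        return hashlib.sha256(packed).hexdigest()

    @staticmethod
    def _close(a, b, tol=1e-6):
        return all(abs(x - y) <= tol for x, y in zip(a, b))

    def online_test(self):
        """Quick check, run at regular intervals during therapy:
        weight-memory checksum plus a single reference inference."""
        if self._digest() != self.weight_digest:
            return False  # weight memory corrupted (e.g. bit flip)
        return self._close(self.infer_fn(self.weights, self.ref_in[0]),
                           self.ref_out[0])

    def offline_test(self):
        """Extensive check, run while the device is not delivering
        therapy: the full reference-vector suite."""
        if self._digest() != self.weight_digest:
            return False
        return all(self._close(self.infer_fn(self.weights, x), y)
                   for x, y in zip(self.ref_in, self.ref_out))
```

A failed online test would flag an internal error (such as a storage fault in the weights) before the network's next inference is trusted; the offline test exercises many more reference vectors than would be affordable during therapy.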



Author information

Correspondence to Uwe Becker.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Becker, U. (2019). Increasing Safety of Neural Networks in Medical Devices. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2019. Lecture Notes in Computer Science, vol. 11699. Springer, Cham. https://doi.org/10.1007/978-3-030-26250-1_10


  • DOI: https://doi.org/10.1007/978-3-030-26250-1_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26249-5

  • Online ISBN: 978-3-030-26250-1

  • eBook Packages: Computer Science (R0)
