
Establishing Safety Criteria for Artificial Neural Networks

  • Conference paper
Knowledge-Based Intelligent Information and Engineering Systems (KES 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2773)

Abstract

Artificial neural networks are employed in many areas of industry, such as medicine and defence. Many techniques aim to improve the performance of neural networks for safety-critical systems, yet there is a complete absence of analytical certification methods for neural network paradigms. Consequently, their role in safety-critical applications, if any, is typically restricted to advisory systems. It is therefore desirable to enable neural networks for highly-dependable roles. This paper defines safety criteria which, if enforced, would contribute to justifying the safety of neural networks. The criteria are a set of safety requirements on the behaviour of neural networks. The paper also highlights the challenge of maintaining performance, in terms of adaptability and generalisation, whilst providing acceptable safety arguments.
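The paper states its criteria as safety requirements at the level of safety arguments rather than code. Purely as an illustrative sketch (all names here are hypothetical and not from the paper), one commonly discussed behavioural requirement of this kind is that a network's output stay within a known safe envelope across its operating region; the wrapper below enforces such an envelope at runtime and counts violations as evidence for a safety argument:

```python
def safe_output(raw_output, lower, upper):
    """Clamp a network output into a predefined safe envelope."""
    return max(lower, min(upper, raw_output))


class MonitoredModel:
    """Wraps any callable model with a runtime safety-envelope check.

    Illustrative only: a real safety case would need to justify the
    envelope itself and the consequences of clamping.
    """

    def __init__(self, model, lower, upper):
        self.model = model
        self.lower = lower
        self.upper = upper
        self.violations = 0  # outputs observed outside the envelope

    def __call__(self, x):
        y = self.model(x)
        if not (self.lower <= y <= self.upper):
            self.violations += 1
        return safe_output(y, self.lower, self.upper)


# Example: a toy "network" whose output can exceed the envelope.
toy_model = lambda x: 2.0 * x
monitored = MonitoredModel(toy_model, lower=-1.0, upper=1.0)
print(monitored(0.3))          # 0.6 — within envelope, passed through
print(monitored(5.0))          # 1.0 — clamped to the upper bound
print(monitored.violations)    # 1
```

This kind of monitor constrains behaviour without retraining, which mirrors the tension the abstract notes: the envelope supports a safety argument, but clamping restricts the adaptability and generalisation the network was chosen for.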




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kurd, Z., Kelly, T. (2003). Establishing Safety Criteria for Artificial Neural Networks. In: Palade, V., Howlett, R.J., Jain, L. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2003. Lecture Notes in Computer Science, vol 2773. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45224-9_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40803-1

  • Online ISBN: 978-3-540-45224-9

  • eBook Packages: Springer Book Archive
