Sensitivity Measurement of Neural Hardware: A Simulation Based Study

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 95))

Abstract

Artificial neural networks are inherently fault tolerant, and their fault tolerance properties have been investigated with reference to hardware models of such networks. The limited precision of neural hardware motivates the study of the sensitivity of feedforward layered networks to weight and input errors. In this paper, we analyze the sensitivity of feedforward layered networks and propose a framework for investigating the fault tolerance properties of a hardware model of artificial neural networks. The results obtained indicate that networks trained with the resilient backpropagation algorithm are not fault tolerant.
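The kind of weight-error sensitivity measurement the abstract describes can be sketched as a minimal simulation: perturb the weights of a trained feedforward network with zero-mean Gaussian noise and record the mean absolute change in its outputs over a set of inputs. The network shape (2-4-1), sigmoid activations, random weights, and noise model below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    # One hidden layer of sigmoid units, sigmoid output unit.
    return sigmoid(W2 @ sigmoid(W1 @ x))

# Hypothetical "trained" 2-4-1 network (random weights stand in for
# weights obtained from a training algorithm such as RPROP).
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(1, 4))
X = rng.uniform(-1.0, 1.0, size=(100, 2))  # sample inputs

def sensitivity(perturb_std, trials=200):
    """Mean absolute output deviation under Gaussian weight perturbation
    of standard deviation `perturb_std`, averaged over `trials` draws."""
    base = np.array([forward(x, W1, W2) for x in X])
    devs = []
    for _ in range(trials):
        dW1 = rng.normal(scale=perturb_std, size=W1.shape)
        dW2 = rng.normal(scale=perturb_std, size=W2.shape)
        pert = np.array([forward(x, W1 + dW1, W2 + dW2) for x in X])
        devs.append(np.abs(pert - base).mean())
    return float(np.mean(devs))

for s in (0.01, 0.05, 0.1):
    print(f"perturbation std {s}: mean output deviation {sensitivity(s):.5f}")
```

A network whose measured deviation grows steeply with the perturbation magnitude would, in this framing, be considered poorly fault tolerant; limited-precision hardware can be modeled similarly by replacing the additive noise with weight quantization error.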





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Singh, A.P., Chandra, P., Rai, C.S. (2010). Sensitivity Measurement of Neural Hardware: A Simulation Based Study. In: Ranka, S., et al. Contemporary Computing. IC3 2010. Communications in Computer and Information Science, vol 95. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14825-5_12

  • DOI: https://doi.org/10.1007/978-3-642-14825-5_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14824-8

  • Online ISBN: 978-3-642-14825-5

