Abstract
Artificial neural networks are inherently fault tolerant, and their fault tolerance properties have been investigated with reference to hardware models of artificial neural networks. The limited precision of neural hardware motivates studying the sensitivity of feedforward layered networks to weight and input errors. In this paper, we analyze the sensitivity of feedforward layered networks and propose a framework for investigating the fault tolerance properties of a hardware model of artificial neural networks. The results obtained indicate that networks trained with the resilient backpropagation algorithm are not fault tolerant.
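To make the simulation idea concrete, the following minimal Python/NumPy sketch shows one common way to measure weight sensitivity of the kind the abstract describes: train a small feedforward network, inject zero-mean Gaussian noise into its weights, and record how far the outputs drift from the unperturbed baseline. This is not the authors' code; the sine regression task, the network size, the noise levels, and the substitution of plain gradient descent for resilient backpropagation are all illustrative assumptions.

```python
# Minimal sketch: weight-perturbation sensitivity of a trained feedforward net.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumption): approximate sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# One hidden layer with tanh activation, linear output.
n_hidden = 10
W1 = rng.normal(0, 0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def forward(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

# Plain gradient descent stands in for RPROP here, purely for brevity.
lr = 0.05
for _ in range(2000):
    out, h = forward(X, W1, b1, W2, b2)
    err = out - y                        # dL/dout for 0.5 * MSE
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)       # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

baseline, _ = forward(X, W1, b1, W2, b2)

# Sensitivity measurement: add zero-mean Gaussian noise to all weights and
# record the mean absolute output deviation, averaged over repeated trials.
for sigma in (0.01, 0.05, 0.1, 0.2):
    devs = []
    for _ in range(50):
        out, _ = forward(X,
                         W1 + rng.normal(0, sigma, W1.shape), b1,
                         W2 + rng.normal(0, sigma, W2.shape), b2)
        devs.append(np.abs(out - baseline).mean())
    print(f"sigma={sigma:.2f}  mean output deviation={np.mean(devs):.4f}")
```

In such an experiment, a fault-tolerant network would show output deviations that grow slowly with the perturbation magnitude sigma; the paper's reported finding is that networks trained with resilient backpropagation do not exhibit this property.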
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Singh, A.P., Chandra, P., Rai, C.S. (2010). Sensitivity Measurement of Neural Hardware: A Simulation Based Study. In: Ranka, S., et al. Contemporary Computing. IC3 2010. Communications in Computer and Information Science, vol 95. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14825-5_12
DOI: https://doi.org/10.1007/978-3-642-14825-5_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-14824-8
Online ISBN: 978-3-642-14825-5