Abstract
A method to implement a digital multilayer perceptron (DMLP) in an FPGA is proposed, where the DMLP is tolerant to simultaneous weight and neuron faults. It has been shown in [1] that a multilayer perceptron (MLP) which has been successfully trained using the deep learning method is tolerant to multiple weight and neuron faults, where the weight faults are between the hidden and output layers and the neuron faults are in the hidden layer. Using this fact, the set of weights of the trained MLP is installed in an FPGA to cope with these faults. Further, neuron faults in the output layer are detected or corrected using a SECDED code. The process works as follows. A generator developed by us automatically outputs a VHDL source file describing the perceptron from the set of weight values of the MLP trained by the deep learning method. The resulting VHDL file is input to Altera's logic design software Quartus II and then implemented in an FPGA. As concrete examples, the process is applied to realizing fault-tolerant DMLPs for character recognition. The fault-tolerant perceptrons and the corresponding non-redundant ones are then compared in terms of not only reliability and fault rate but also hardware size, computing speed, and power consumption. The data show that the fault rate of the fault-tolerant perceptron is significantly lower than that of the corresponding non-redundant one.
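The SECDED protection of the output layer mentioned above can be illustrated with a short sketch. This is not the paper's VHDL implementation; it is a hypothetical Python model of a standard SECDED scheme (a Hamming(7,4) code extended with an overall parity bit), which corrects any single-bit error and detects any double-bit error in an 8-bit word. All function names are our own.

```python
def hamming_secded_encode(nibble):
    """Encode 4 data bits into an 8-bit SECDED word:
    Hamming(7,4) plus an overall parity bit."""
    d = [(nibble >> i) & 1 for i in range(4)]  # data bits d0..d3
    # Each parity bit covers the codeword positions whose index
    # (1-based) has the corresponding binary digit set.
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    p0 = 0
    for b in word:          # overall parity over the 7 Hamming bits
        p0 ^= b
    return word + [p0]

def hamming_secded_decode(word):
    """Return (status, nibble); status is 'ok', 'corrected' or 'double'."""
    bits = word[:7]
    s = 0
    for pos in range(1, 8):  # syndrome: XOR of positions holding a 1
        if bits[pos - 1]:
            s ^= pos
    overall = word[7]
    for b in bits:
        overall ^= b
    if s == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:       # odd total parity: single error, correctable
        if s != 0:
            bits[s - 1] ^= 1  # flip the erroneous bit (s==0 means p0 itself)
        status = 'corrected'
    else:                    # even parity but nonzero syndrome: double error
        return 'double', None
    d = [bits[2], bits[4], bits[5], bits[6]]
    return status, sum(b << i for i, b in enumerate(d))
```

A single flipped bit in the stored output is silently corrected, while two flipped bits are flagged as an uncorrectable (but detected) fault.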
This paper is the full version of [2].
Itsuo Takanami—Retired.
References
Horita, T., Takanami, I., Mori, M.: Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. IEICE Trans. Inf. Syst. E91-D(4), 1168–1175 (2008)
Horita, T., Takanami, I.: An FPGA-based multiple-weight-and-neuron-fault tolerant digital multilayer perceptron. Neurocomputing 99, 570–574 (2013)
Phatak, D.S., Koren, I.: Complete and partial fault tolerance of feedforward neural nets. IEEE Trans. Neural Netw. 6(2), 446–456 (1995)
Fahlman, S.E., et al.: Neural nets learning algorithms and benchmarks database. Computer Science Department, Carnegie Mellon University
Nijhuis, J., Hoefflinger, B., van Schaik, A., Spaanenburg, L.: Limits to the fault-tolerance of a feedforward neural network with learning. In: Proceedings of International Symposium on FTCS, pp. 228–235 (1990)
Tan, Y., Nanya, T.: A fault-tolerant multi-layer neural network model and its properties. IEICE D-I J76-D-I(7), 380–389 (1993). (in Japanese)
Murray, A.F., Edwards, P.J.: Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvement. IEEE Trans. Neural Netw. 4(4), 722–725 (1993)
Hammadi, N.C., Ohmameuda, T., Kaneko, K., Ito, H.: Dynamic constructive fault tolerant algorithm for feedforward neural networks. IEICE Trans. Inf. Syst. E81-D(1), 115–123 (1998)
Takase, H., Kita, H., Hayashi, T.: Weight minimization approach for fault tolerant multi-layer neural networks. In: Proceedings of International Joint Conference on Neural Networks, pp. 2656–2660 (2001)
Kamiura, N., Taniguchi, Y., Hata, Y., Matsui, N.: A learning algorithm with activation function manipulation for fault tolerant neural networks. IEICE Trans. Inf. Syst. E84-D(7), 899–905 (2001)
Clay, R.D., Séquin, C.H.: Fault tolerance training improves generalization and robustness. In: Proceedings of International Joint Conference on Neural Networks, pp. I-769–I-774 (1992)
Cavalieri, S., Mirabella, O.: A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Neural Netw. (Pergamon) 12(1), 91–106 (1999)
Demidenko, S., Piuri, V.: Concurrent diagnosis in digital implementations of neural networks. Neurocomputing 48(1-4), 879–903 (2002)
Zhang, Y., Guo, L., Yu, H., Zhao, K.: Fault tolerant control based on stochastic distributions via MLP neural networks. Neurocomputing 70(4–6), 867–874 (2007)
Ho, K., Leung, C.S., Sum, J.: Training RBF network to tolerate single node faults. Neurocomputing 74(6), 1046–1052 (2011)
Mak, S.K., Sum, P.F., Leung, C.S.: Regularizers for fault tolerant multilayer feedforward networks. Neurocomputing 74(11), 2028–2040 (2011)
Takanami, I., Sato, M., Yang, Y.P.: A fault-value injection approach for multiple-weight-fault tolerance of MNNs. In: Proceedings of International Joint Conference on Neural Networks (IJCNN), p. III-515 (2000)
Takanami, I., Oyama, Y.: A novel learning algorithm which makes multilayer neural networks multiple-weight-fault tolerant. IEICE Trans. Inf. Syst. E86-D(12), 2536–2543 (2003)
Massengill, L.W., Mundie, D.B.: An analog neural hardware implementation using charge-injection multipliers and neuron-specific gain control. IEEE Trans. Neural Netw. 3(3), 354–362 (1992)
Frye, R.C., Rietman, E.A., Wong, C.C.: Back-propagation learning and nonidealities in analog neural network hardware. IEEE Trans. Neural Netw. 2(1), 110–117 (1991)
Holt, J.L., Hwang, J.N.: Finite precision error analysis of neural network hardware implementations. IEEE Trans. Comput. 42(3), 281–290 (1993)
Mauduit, N., Duranton, M., Gobert, J., Sirat, J.A.: Lneuro 1.0: a piece of hardware LEGO for building neural network systems. IEEE Trans. Neural Netw. 3(3), 414–422 (1992)
Murakawa, M., Yoshizawa, S., et al.: The GRD chip: genetic reconfiguration of DSPs for neural network processing. IEEE Trans. Comput. 48, 628–639 (1999)
Brown, B.D., Card, H.C.: Stochastic neural computation I: computational elements. IEEE Trans. Comput. 50(9), 891–905 (2001)
Card, H.C., McNeal, D.K., McLeod, R.D.: Competitive learning algorithms and neurocomputer architecture. IEEE Trans. Comput. 47(8), 847–858 (1998)
Ninomiya, H., Asai, H.: Neural networks for digital sequential circuits. IEICE Trans. Fundam. E77-A(12), 2112–2115 (1994)
Aihara, K., Fujita, O., Uchimura, K.: A sparse memory access architecture for digital neural network LSIs. IEICE Trans. Electron. E80-C(7), 996–1002 (1997)
Morishita, T., Tamura, Y., et al.: A digital neural network coprocessor with a dynamically reconfigurable pipeline architecture. IEICE Trans. Electron. E76-C(7), 1191–1196 (1993)
Fujita, M., Kobayashi, Y., et al.: Development and fabrication of digital neural network WSIs. IEICE Trans. Electron. E76-C(7), 1182–1190 (1993)
Bettola, S., Piuri, V.: High performance fault-tolerant digital neural networks. IEEE Trans. Comput. 47(3), 357–363 (1998)
Sugawara, E., Fukushi, M., Horiguchi, S.: Self reconfigurable multi-layer neural networks with genetic algorithms. IEICE Trans. Inf. Syst. E87-D(8), 2021–2028 (2004)
Altera reliability report homepage. http://www.altera.com/literature/rr/rr.pdf
Acknowledgment
This research was supported by the university program of Altera Inc. The authors also thank Y. Nishimura for the idea of a prototype FTDMLN VHDL notation, and H. Sudo, T. Kanda, T. Murata, and K. Takeuchi for their preliminary work.
Appendix A: Proof of Theorem 1
Let \(X\) (\(\hat{X}\)) be the weighted sum of inputs to a neuron in the output layer when the MLP is fault-free (when the MLP has faulty neurons in the hidden layer and/or faulty weights). \(X\) and \(\hat{X}\) are given as follows:
$$X = \sum_{i \in I_M} w_i \cdot h_i,$$
where \(I_M\) is the set of indices of the neurons in the hidden layer. Note that the index of the weight connected to the target neuron in the output layer is also denoted by \(i\).
$$\hat{X} = \sum_{i \in I_M} w'_i \cdot h'_i,$$
where \(w_i\) (\(w'_i\)) is the value of the \(i\)-th weight when it is healthy (faulty) and \(h_i\) (\(h'_i\)) is the output of the \(i\)-th neuron in the hidden layer when it is healthy (faulty).
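The quantities \(X\) and \(\hat{X}\) can be made concrete with a small numeric sketch. The weights and fault pattern below are hypothetical, not taken from the paper; the point is only that with binary hidden outputs and a step activation, the output neuron's decision can survive a stuck hidden neuron and a faulty weight as long as the perturbed sum keeps its sign.

```python
def step(x):
    """Step activation used to compute each output neuron's value."""
    return 1 if x >= 0 else 0

def weighted_sum(weights, hidden):
    """X (or X-hat): the weighted sum of hidden outputs into one output neuron."""
    return sum(w * h for w, h in zip(weights, hidden))

w = [0.8, -0.5, 0.9, 0.7]   # hypothetical trained weights w_i
h = [1, 0, 1, 1]            # healthy hidden outputs h_i (0 or 1)

X = weighted_sum(w, h)       # fault-free sum

# Inject faults: hidden neuron 0 stuck at 0, weight 2 stuck at 0.
h_f = [0] + h[1:]
w_f = w[:2] + [0.0] + w[3:]
X_hat = weighted_sum(w_f, h_f)

print(step(X), step(X_hat))  # prints "1 1": the decision is preserved
```

Here \(X = 2.4\) and \(\hat{X} = 0.7\); both are nonnegative, so the step output is unchanged despite the two simultaneous faults.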
In addition, the following equations are true because of Assumptions 2 and 3, and \(h_k\) being 0 or 1.
Furthermore,
and
where \(N_{18}\), \(N_{19}\), and \(N_{20}\) are the numbers of terms in the summations of Eqs. (18), (19), and (20), respectively. Therefore,
Thus,
Then
From this equation and the \(DL\)-\(cond(N_d)\),
- \(\hat{X} \ge w_{max} \cdot (N_d - (|\hat{J} \cup \hat{L}| + |\hat{L}|)) \ge 0\) (if \(t_p = 1\)),
- \(\hat{X} < -w_{max} \cdot (N_d - (|\hat{J} \cup \hat{L}| + |\hat{L}|)) < 0\) (if \(t_p = 0\)).
From these inequalities, Eq. (5), and the use of the step function to compute the output of each neuron, the theorem is proved.    \(\Box \)
Copyright information
© 2015 Springer-Verlag Berlin Heidelberg
Cite this chapter
Horita, T., Takanami, I., Akiba, M., Terauchi, M., Kanno, T. (2015). An FPGA-Based Multiple-Weight-and-Neuron-Fault Tolerant Digital Multilayer Perceptron (Full Version). In: Gavrilova, M., Tan, C., Saeed, K., Chaki, N., Shaikh, S. (eds) Transactions on Computational Science XXV. Lecture Notes in Computer Science(), vol 9030. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47074-9_9