An FPGA-Based Multiple-Weight-and-Neuron-Fault Tolerant Digital Multilayer Perceptron (Full Version)

Chapter in Transactions on Computational Science XXV

Part of the book series: Lecture Notes in Computer Science, volume 9030

Abstract

A method for implementing a digital multilayer perceptron (DMLP) in an FPGA is proposed, where the DMLP is tolerant to simultaneous weight and neuron faults. It has been shown in [1] that a multilayer perceptron (MLP) which has been successfully trained with the deep learning method is tolerant to multiple weight and neuron faults, where the weight faults lie between the hidden and output layers and the neuron faults lie in the hidden layer. Using this fact, the set of weights of the trained MLP is installed in an FPGA to cope with these faults. In addition, neuron faults in the output layer are detected or corrected using a SECDED code. The process is carried out as follows. A generator developed by us automatically outputs a VHDL source file that describes the perceptron from the set of weight values of the MLP trained by the deep learning method. The VHDL file is input to the logic design software Quartus II of Altera Inc. and then implemented in an FPGA. The process is applied to realizing fault-tolerant DMLPs for character recognition as concrete examples. The fault-tolerant perceptrons and the corresponding non-redundant ones are then compared not only in reliability and fault rate but also in hardware size, computing speed, and power consumption. The data show that the fault rate of the fault-tolerant perceptron can be made significantly lower than that of the corresponding non-redundant one.
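
As a rough behavioral illustration of the structure the generator emits (the actual implementation is machine-generated VHDL synthesized with Quartus II), the following Python sketch models a single output-layer neuron whose trained weights are fixed as constants and whose activation is a step function. The layer size, weight values, and threshold-at-zero convention are illustrative assumptions, not values taken from the chapter.

def step(x):
    # Step activation: fire (1) when the weighted sum is non-negative, else 0.
    # (Illustrative convention; the chapter's exact activation is defined in the main text.)
    return 1 if x >= 0 else 0

def output_neuron(hidden_bits, weights):
    # One output-layer neuron: the trained weights are fixed constants (baked into
    # the generated VHDL in the real design) and the hidden-layer outputs are binary.
    x = sum(w * h for w, h in zip(weights, hidden_bits))  # X = sum_i w_i * h_i
    return step(x)

W = [0.8, -0.5, 1.2, -0.3]             # hypothetical trained weights for 4 hidden neurons
print(output_neuron([1, 0, 1, 1], W))  # X = 0.8 + 1.2 - 0.3 = 1.7  ->  1

In the hardware described in the chapter, the bits produced by such output neurons would additionally be protected by the SECDED code mentioned above; that encoding step is not shown in this sketch.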

This paper is the full version of [2].

Itsuo Takanami—Retired.


References

  1. Horita, T., Takanami, I., Mori, M.: Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. IEICE Trans. Inf. Syst. E91-D(4), 1168–1175 (2008)

  2. Horita, T., Takanami, I.: An FPGA-based multiple-weight-and-neuron-fault tolerant digital multilayer perceptron. Neurocomputing 99, 570–574 (2013). Elsevier

  3. Phatak, D.S., Koren, I.: Complete and partial fault tolerance of feedforward neural nets. IEEE Trans. Neural Netw. 6(2), 446–456 (1995)

  4. Fahlman, S.E., et al.: Neural nets learning algorithms and benchmarks database. Maintained by Fahlman, S.E., et al. at the Computer Science Department, Carnegie Mellon University

  5. Nijhuis, J., Hoefflinger, B., van Schaik, A., Spaanenburg, L.: Limits to the fault-tolerance of a feedforward neural network with learning. In: Proceedings of International Symposium on FTCS, pp. 228–235 (1990)

  6. Tan, Y., Nanya, T.: A fault-tolerant multi-layer neural network model and its properties. IEICE D-I J76-D-I(7), 380–389 (1993). (in Japanese)

  7. Murray, A.F., Edwards, P.J.: Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvement. IEEE Trans. Neural Netw. 4(4), 722–725 (1993)

  8. Hammadi, N.C., Ohmameuda, T., Kaneko, K., Ito, H.: Dynamic constructive fault tolerant algorithm for feedforward neural networks. IEICE Trans. Inf. Syst. E81-D(1), 115–123 (1998)

  9. Takase, H., Kita, H., Hayashi, T.: Weight minimization approach for fault tolerant multi-layer neural networks. In: Proceedings of International Joint Conference on Neural Networks, pp. 2656–2660 (2001)

  10. Kamiura, N., Taniguchi, Y., Hata, Y., Matsui, N.: A learning algorithm with activation function manipulation for fault tolerant neural networks. IEICE Trans. Inf. Syst. E84-D(7), 899–905 (2001)

  11. Clay, R.D., Séquin, C.H.: Fault tolerance training improves generalization and robustness. In: Proceedings of International Joint Conference on Neural Networks, pp. I-769–I-774 (1992)

  12. Cavalieri, S., Mirabella, O.: A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Neural Netw. (Pergamon) 12(1), 91–106 (1999)

  13. Demidenko, S., Piuri, V.: Concurrent diagnosis in digital implementations of neural networks. Neurocomputing 48(1-4), 879–903 (2002)

  14. Zhang, Y., Guo, L., Yu, H., Zhao, K.: Fault tolerant control based on stochastic distributions via MLP neural networks. Neurocomputing 70(4–6), 867–874 (2007)

  15. Ho, K., Leung, C.S., Sum, J.: Training RBF network to tolerate single node faults. Neurocomputing 74(6), 1046–1052 (2011)

  16. Mak, S.K., Sum, P.F., Leung, C.S.: Regularizers for fault tolerant multilayer feedforward networks. Neurocomputing 74(11), 2028–2040 (2011)

  17. Takanami, I., Sato, M., Yang, Y.P.: A fault-value injection approach for multiple-weight-fault tolerance of MNNs. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN), p. III-515 (2000)

  18. Takanami, I., Oyama, Y.: A novel learning algorithm which makes multilayer neural networks multiple-weight-fault tolerant. IEICE Trans. Inf. Syst. E86-D(12), 2536–2543 (2003)

  19. Massengill, L.W., Mundie, D.B.: An analog neural hardware implementation using charge-injection multipliers and neuron-specific gain control. IEEE Trans. Neural Netw. 3(3), 354–362 (1992)

  20. Frye, R.C., Rietman, E.A., Wong, C.C.: Back-propagation learning and nonidealities in analog neural network hardware. IEEE Trans. Neural Netw. 2(1), 110–117 (1991)

  21. Holt, J.L., Hwang, J.N.: Finite precision error analysis of neural network hardware implementations. IEEE Trans. Comput. 42(3), 281–290 (1993)

  22. Mauduit, N., Duranton, M., Gobert, J., Sirat, J.A.: Lneuro 1.0: a piece of hardware LEGO for building neural network systems. IEEE Trans. Neural Netw. 3(3), 414–422 (1992)

  23. Murakawa, M., Yoshizawa, S., et al.: The GRD chip: genetic reconfiguration of DSPs for neural network processing. IEEE Trans. Comput. 48, 628–639 (1999)

  24. Brown, B.D., Card, H.C.: Stochastic neural computation I: computational elements. IEEE Trans. Comput. 50(9), 891–905 (2001)

  25. Card, H.C., McNeal, D.K., McLeod, R.D.: Competitive learning algorithms and neurocomputer architecture. IEEE Trans. Comput. 47(8), 847–858 (1998)

  26. Ninomiya, H., Asai, H.: Neural networks for digital sequential circuits. IEICE Trans. Fundam. E77-A(12), 2112–2115 (1994)

  27. Aihara, K., Fujita, O., Uchimura, K.: A sparse memory access architecture for digital neural network LSIs. IEICE Trans. Electron. E80-C(7), 996–1002 (1997)

  28. Morishita, T., Tamura, Y., et al.: A digital neural network coprocessor with a dynamically reconfigurable pipeline architecture. IEICE Trans. Electron. E76-C(7), 1191–1196 (1993)

  29. Fujita, M., Kobayashi, Y., et al.: Development and fabrication of digital neural network WSIs. IEICE Trans. Electron. E76-C(7), 1182–1190 (1993)

  30. Bettola, S., Piuri, V.: High performance fault-tolerant digital neural networks. IEEE Trans. Comput. 47(3), 357–363 (1998)

  31. Sugawara, E., Fukushi, M., Horiguchi, S.: Self reconfigurable multi-layer neural networks with genetic algorithms. IEICE Trans. Inf. Syst. E87-D(8), 2021–2028 (2004)

  32. Altera reliability report homepage. http://www.altera.com/literature/rr/rr.pdf

Acknowledgment

This research was supported by the university program of Altera Inc. The authors are also grateful to Y. Nishimura for the idea of a prototype FTDMLN VHDL notation, and to H. Sudo, T. Kanda, T. Murata, and K. Takeuchi for other preliminary work.

Author information

Correspondence to Tadayoshi Horita.


Appendix A: Proof of Theorem 1

Let \(X\) (\(\hat{X}\)) be the weighted sum of inputs to a neuron in the output layer when the MLP is fault free (the MLP has faulty neurons in the hidden layer and/or faulty weights). \(X\) and \(\hat{X}\) are given as follows.

$$\begin{aligned} X= \sum _{i\in I_M} w_ih_i \end{aligned}$$
(16)

where \(I_M\) is the set of indices of the neurons in the hidden layer. Note that the weight connecting the \(i\)-th hidden neuron to the target neuron in the output layer is also indexed by \(i\).

$$\begin{aligned} \hat{X}&= \sum _{k \notin \hat{L}, k \notin \hat{J}}w_k \cdot h_k + \sum _{k \in \hat{L}, k \notin \hat{J}}w'_k \cdot h_k \nonumber \\&\quad + \sum _{k \notin \hat{L}, k \in \hat{J}}w_k \cdot h'_k + \sum _{k \in \hat{L}, k \in \hat{J}}w'_k \cdot h'_k \end{aligned}$$
(17)

where \(\hat{L}\) (\(\hat{J}\)) is the set of indices of the faulty weights (of the faulty neurons in the hidden layer), \(w_i\) (\(w'_i\)) is the value of the \(i\)-th weight when it is healthy (faulty), and \(h_i\) (\(h'_i\)) is the value of the \(i\)-th neuron in the hidden layer when it is healthy (faulty).

From Eqs. (16) and (17),

$$\begin{aligned} \hat{X} -X&= \sum _{k \in \hat{L}, k \notin \hat{J}}(w'_k -w_k) \cdot h_k \end{aligned}$$
(18)
$$\begin{aligned}&\quad + \sum _{k \notin \hat{L}, k \in \hat{J}}w_k \cdot (h'_k -h_k) \end{aligned}$$
(19)
$$\begin{aligned}&\quad + \sum _{k \in \hat{L}, k \in \hat{J}}(w'_k \cdot h'_k-w_k \cdot h_k) \end{aligned}$$
(20)

In addition, the following inequalities hold because of Assumptions 2 and 3 and the fact that \(h_k\) is 0 or 1.

$$-2 \cdot w_{max} \le (w'_k -w_k) \cdot h_k \le 2 \cdot w_{max}$$
$$\begin{aligned} -w_{max} \le w_k \cdot (h'_k -h_k) \le w_{max} \end{aligned}$$
$$-2 \cdot w_{max} \le (w'_k \cdot h'_k-w_k \cdot h_k) \le 2 \cdot w_{max}$$

Furthermore,

$$N_{18} + N_{20} = |\hat{L}|,$$
$$N_{19} + N_{20} = |\hat{J}|,$$

and

$$N_{20}=|\hat{J} \cap \hat{L}|,$$

where \(N_{18}\), \(N_{19}\), and \(N_{20}\) are the numbers of terms in the summations in Eqs. (18), (19), and (20), respectively. Therefore, by inclusion-exclusion (\(|\hat{J} \cup \hat{L}| = |\hat{J}| + |\hat{L}| - |\hat{J} \cap \hat{L}|\)),

$$\begin{aligned} 2N_{18}+N_{19}+2N_{20} &= |\hat{J}|+2|\hat{L}|-|\hat{J} \cap \hat{L}| \\ &= |\hat{J} \cup \hat{L}| + |\hat{L}|. \end{aligned}$$

Since each term in the summations of Eqs. (18) and (20) has absolute value at most \(2 \cdot w_{max}\) and each term in the summation of Eq. (19) has absolute value at most \(w_{max}\),

$$-w_{max}\cdot (|\hat{J} \cup \hat{L}| + |\hat{L}|)\le (\hat{X} - X)\le w_{max}\cdot (|\hat{J} \cup \hat{L}| + |\hat{L}|).$$

Then

$$X-w_{max}\cdot (|\hat{J} \cup \hat{L}| + |\hat{L}|)\le \hat{X}\le X + w_{max}\cdot (|\hat{J} \cup \hat{L}| + |\hat{L}|).$$

From this equation and the \(DL\)-\(cond(N_d)\),

  • \(\hat{X} \ge w_{max} \cdot (N_d - (|\hat{J} \cup \hat{L}| + |\hat{L}|)) \ge 0\) (if \(t_p = 1\)),

  • \(\hat{X} < -w_{max} \cdot (N_d - (|\hat{J} \cup \hat{L}| + |\hat{L}|)) < 0\) (if \(t_p = 0\)).

From these inequalities, Eq. (5), and the use of the step function for calculating the output of each neuron, the theorem is proved.    \(\Box \)
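
The bound just derived can also be exercised numerically. The following Python sketch is a sanity check added here, not part of the original proof: it injects random weight faults (index set \(\hat{L}\)) and hidden-neuron faults (index set \(\hat{J}\)), reading Assumptions 2 and 3 as \(|w_k|, |w'_k| \le w_{max}\) and faulty hidden outputs lying in \(\{0,1\}\). The network size, number of trials, and fault distributions are arbitrary choices made only for this experiment.

import random

def check_bound(n_hidden=16, w_max=1.0, trials=10000, seed=0):
    # Verify |X_hat - X| <= w_max * (|J_hat ∪ L_hat| + |L_hat|) on random fault patterns.
    rng = random.Random(seed)
    for _ in range(trials):
        w = [rng.uniform(-w_max, w_max) for _ in range(n_hidden)]   # healthy weights, |w_k| <= w_max
        h = [rng.randint(0, 1) for _ in range(n_hidden)]            # healthy hidden outputs in {0, 1}
        L_hat = set(rng.sample(range(n_hidden), rng.randint(0, n_hidden)))  # faulty-weight indices
        J_hat = set(rng.sample(range(n_hidden), rng.randint(0, n_hidden)))  # faulty-neuron indices
        w_f = [rng.uniform(-w_max, w_max) if k in L_hat else w[k] for k in range(n_hidden)]
        h_f = [rng.randint(0, 1) if k in J_hat else h[k] for k in range(n_hidden)]  # faulty output stuck at 0 or 1
        X = sum(wk * hk for wk, hk in zip(w, h))
        X_hat = sum(wk * hk for wk, hk in zip(w_f, h_f))
        bound = w_max * (len(J_hat | L_hat) + len(L_hat))
        assert abs(X_hat - X) <= bound + 1e-9
    print("bound held in", trials, "random fault patterns")

check_bound()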


Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Horita, T., Takanami, I., Akiba, M., Terauchi, M., Kanno, T. (2015). An FPGA-Based Multiple-Weight-and-Neuron-Fault Tolerant Digital Multilayer Perceptron (Full Version). In: Gavrilova, M., Tan, C., Saeed, K., Chaki, N., Shaikh, S. (eds) Transactions on Computational Science XXV. Lecture Notes in Computer Science(), vol 9030. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47074-9_9

  • DOI: https://doi.org/10.1007/978-3-662-47074-9_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-47073-2

  • Online ISBN: 978-3-662-47074-9
