
FPGA Implementation of Neurocomputational Models: Comparison Between Standard Back-Propagation and C-Mantec Constructive Algorithm

Published in: Neural Processing Letters

Abstract

Recent advances in FPGA technology have made it feasible to implement neurocomputational models in hardware, offering an interesting alternative to standard PCs: the intrinsic parallelism of FPGAs can be exploited to speed up the computations involved. In this work we analyse and compare the FPGA implementation of two neural network learning algorithms: the standard, well-known Back-Propagation algorithm and C-Mantec, a constructive neural network algorithm that generates compact single-hidden-layer architectures with good predictive capabilities. A key difference between the two algorithms is that Back-Propagation requires a predefined architecture, whereas C-Mantec constructs its network while learning the input patterns. Several aspects of the FPGA implementation of both algorithms are analysed, focusing on features such as the logic and memory resources needed, the implementation of the transfer function, and computation time. The advantages and disadvantages of both methods with respect to their hardware implementations are discussed.
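The structural difference highlighted above (a fixed topology chosen before Back-Propagation training versus a network grown during training by C-Mantec) can be sketched as follows. This is an illustrative toy in Python, not the authors' hardware implementation nor the actual C-Mantec learning rule (which retrains existing units with a thermal perceptron rule under competition among neurons); the class names and the growth criterion below are assumptions made purely for illustration.

```python
import random

class FixedMLP:
    """Back-Propagation style: the number of hidden units must be
    fixed before training starts."""
    def __init__(self, n_in, n_hidden):
        # one weight vector (bias + n_in weights) per hidden unit
        self.hidden = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                       for _ in range(n_hidden)]

class ConstructiveNet:
    """Constructive style: starts with no hidden units and adds
    single-layer units while the training set is not yet learned."""
    def __init__(self, n_in):
        self.n_in = n_in
        self.hidden = []  # grows during training

    def _unit_out(self, w, x):
        # threshold unit: bias is w[0], weights are w[1:]
        s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        return 1 if s >= 0 else 0

    def predict(self, x):
        # majority vote of the hidden units (a stand-in for the
        # real output rule, chosen only to keep the sketch short)
        votes = sum(self._unit_out(w, x) for w in self.hidden)
        return 1 if 2 * votes > len(self.hidden) else 0

    def fit(self, samples, max_units=8):
        # grow the network while some pattern is misclassified;
        # real C-Mantec also retrains the existing units
        while (any(self.predict(x) != y for x, y in samples)
               and len(self.hidden) < max_units):
            self.hidden.append(
                [random.uniform(-1, 1) for _ in range(self.n_in + 1)])
```

The contrast relevant to the hardware comparison is visible in the constructors: `FixedMLP` must reserve resources for `n_hidden` units up front, while `ConstructiveNet.hidden` only grows as training demands, which is why the FPGA resource budgets of the two approaches differ.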



Acknowledgements

The authors acknowledge support from Junta de Andalucía (Secretaría General de Universidades, Investigación y Tecnología) through Grant P10-TIC-5770, and from MINECO (Spain) through Grants TIN2010-16556 and TIN2014-58516-c2-1-R (all including FEDER funds).

Author information

Corresponding author: Leonardo Franco.


Cite this article

Ortega-Zamorano, F., Jerez, J.M., Juárez, G.E. et al. FPGA Implementation of Neurocomputational Models: Comparison Between Standard Back-Propagation and C-Mantec Constructive Algorithm. Neural Process Lett 46, 899–914 (2017). https://doi.org/10.1007/s11063-017-9655-x
