Abstract
This paper evaluates the classification performance of a Feed-Forward Backpropagation Neural Network (FFBPNN) implemented on a reconfigurable hardware architecture. The architecture consists of a set of interconnected HyperCells (HCs), which serve as reconfigurable datapaths for the network. It scales easily and can implement networks with no limitation on the number of input and output dimensions. The performance of the FFBPNN implemented on a network of HCs, with a Xilinx Virtex 7 XC7V2000T as the target FPGA, is compared against software and GPU implementations of the FFBPNN. Results show speedups of 1.02x-3.49x over an equivalent software implementation on an Intel Core 2 Quad and 1.07x-6x over a GPU (NVIDIA GTX 650).
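For readers unfamiliar with the algorithm being accelerated, the following is a minimal pure-Python sketch of FFBPNN training (forward pass plus backpropagation weight updates) on the XOR classification task. This is an illustrative software model only, not the paper's HyperCell-based hardware implementation; the network size, learning rate, and training loop are illustrative choices.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-input, 2-hidden, 1-output network.
# Each weight vector's last entry is the bias term.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    """Feed-forward pass: returns hidden activations and the output."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def train_step(x, target, lr=0.5):
    """One backpropagation update for a single training sample."""
    h, y = forward(x)
    # Output-layer delta: derivative of squared error through the sigmoid.
    delta_o = (y - target) * y * (1 - y)
    # Hidden-layer deltas: output error propagated back through w_out.
    delta_h = [delta_o * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
    # Gradient-descent weight updates.
    for i in range(2):
        w_out[i] -= lr * delta_o * h[i]
    w_out[2] -= lr * delta_o  # output bias
    for i in range(2):
        w_hidden[i][0] -= lr * delta_h[i] * x[0]
        w_hidden[i][1] -= lr * delta_h[i] * x[1]
        w_hidden[i][2] -= lr * delta_h[i]  # hidden bias

# XOR training data: (input pair, class label).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
for _ in range(20000):
    for x, t in data:
        train_step(x, t)

for x, t in data:
    _, y = forward(x)
    print(x, "->", y)
```

The hardware architecture evaluated in the paper maps the multiply-accumulate and activation operations of this computation onto reconfigurable HC datapaths rather than executing them sequentially in software.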
© 2016 Springer International Publishing Switzerland
Cite this paper
Mohammadi, M., Ronge, R., Singapuram, S.S., Nandy, S.K. (2016). Performance Evaluation of Feed-Forward Backpropagation Neural Network for Classification on a Reconfigurable Hardware Architecture. In: Bonato, V., Bouganis, C., Gorgon, M. (eds) Applied Reconfigurable Computing. ARC 2016. Lecture Notes in Computer Science(), vol 9625. Springer, Cham. https://doi.org/10.1007/978-3-319-30481-6_25
Print ISBN: 978-3-319-30480-9
Online ISBN: 978-3-319-30481-6