Abstract
A block-based neural network (BBNN) consists of a 2-D array of memory-based, modular component neural networks with flexible structures and internal configurations, and can be implemented on reconfigurable hardware such as a field-programmable gate array (FPGA). The network structure and weights are encoded as bit strings and optimized globally with genetic operators. The asynchronous BBNN (ABBNN), a new BBNN model, achieves higher performance by exploiting parallel computation and a pipelined architecture: its operating frequency remains stable at every network scale, whereas that of the conventional BBNN decreases as the network grows. The ABBNN architecture can therefore process and analyze high-sample-rate data simultaneously. However, optimization by the genetic algorithm is costly, and memory access is one of the factors that degrade training performance. In this paper, we introduce a new algorithm that reduces memory access during BBNN optimization. An ABBNN optimized with the proposed evolutionary algorithm is applied to general classification tasks to verify its effectiveness in reducing memory access.
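To make the encoding concrete, the sketch below shows how a 2-D array of blocks with bit-string-encoded weights and per-block configuration bits might be represented and optimized with a plain genetic algorithm. All names, bit widths, block layout, and the stand-in fitness function are illustrative assumptions for exposition only; they do not reproduce the authors' ABBNN hardware model or the proposed memory-access-reducing algorithm.

```python
# Illustrative sketch: bit-string genotype for a 2-D array of blocks plus a
# basic GA loop. Bit widths, block layout, and fitness are assumptions, not
# the authors' FPGA/ABBNN implementation.
import random

ROWS, COLS = 2, 4            # 2-D array of blocks
WEIGHTS_PER_BLOCK = 4        # small weight set inside each block
BITS_PER_WEIGHT = 8          # weights stored as fixed-point bit strings
BLOCK_BITS = 2 + WEIGHTS_PER_BLOCK * BITS_PER_WEIGHT  # 2 bits select the block's I/O mode
GENOME_BITS = ROWS * COLS * BLOCK_BITS

def decode_weight(bits):
    """Two's-complement fixed point mapped to [-1, 1)."""
    v = int("".join(map(str, bits)), 2)
    if bits[0] == 1:
        v -= 1 << BITS_PER_WEIGHT
    return v / (1 << (BITS_PER_WEIGHT - 1))

def decode_genome(genome):
    """Split the flat bit string into (io_mode, weights) per block."""
    blocks = []
    for b in range(ROWS * COLS):
        bits = genome[b * BLOCK_BITS:(b + 1) * BLOCK_BITS]
        io_mode = bits[0] * 2 + bits[1]
        weights = [decode_weight(bits[2 + k * BITS_PER_WEIGHT:
                                      2 + (k + 1) * BITS_PER_WEIGHT])
                   for k in range(WEIGHTS_PER_BLOCK)]
        blocks.append((io_mode, weights))
    return blocks

def fitness(genome):
    """Stand-in fitness: reward weights near an arbitrary target value.
    A real evaluation would run the decoded block network on training data."""
    target = 0.5
    return -sum(abs(w - target) for _, ws in decode_genome(genome) for w in ws)

def evolve(pop_size=20, generations=50, p_mut=0.01):
    """Truncation selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, GENOME_BITS)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```

In this toy setup, every fitness evaluation decodes the full genome from memory; a scheme that reduces such per-evaluation memory access is the kind of improvement the paper targets, though the actual method is not reproduced here.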