Abstract
This paper addresses the problem of accelerating large artificial neural networks (ANNs) whose topology and weights can evolve via a genetic algorithm. The proposed digital hardware architecture can process any evolved network topology while providing a good trade-off between throughput, area and power consumption; the latter is vital for longer battery life on mobile devices. The architecture uses multiple parallel arithmetic units in each processing element (PE). Memory partitioning and data caching minimise the effects of PE pipeline stalling. A first-order minimax polynomial approximation scheme, tuned via a genetic algorithm, is used for the activation function generator. Efficient arithmetic circuitry, leveraging modified Booth recoding, column compressors and carry-save adders, is adopted throughout the design.
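The activation function generator described above replaces a transcendental function with first-order (linear) polynomial segments. As a rough illustration of the idea (not the paper's method: the authors tune segment coefficients with a genetic algorithm against a minimax error criterion, whereas this sketch simply interpolates each segment through its endpoints), a piecewise-linear logistic sigmoid might look like:

```python
import math

# Hypothetical sketch: approximate the logistic sigmoid on [0, 8) with one
# linear segment per unit interval. Segment coefficients are fitted through
# the interval endpoints here; the paper instead tunes them via a genetic
# algorithm to minimise the maximum (minimax) error.
SEGMENTS = []
for k in range(8):
    x0, x1 = float(k), float(k + 1)
    y0 = 1.0 / (1.0 + math.exp(-x0))
    y1 = 1.0 / (1.0 + math.exp(-x1))
    slope = y1 - y0                      # interval width is 1, so slope = dy
    SEGMENTS.append((slope, y0 - slope * x0))  # y = slope * x + intercept

def sigmoid_approx(x: float) -> float:
    """First-order piecewise approximation of the sigmoid.

    Exploits the symmetry sigma(-x) = 1 - sigma(x) so only the positive
    half-axis needs stored coefficients, and saturates for x >= 8.
    """
    if x < 0.0:
        return 1.0 - sigmoid_approx(-x)
    if x >= 8.0:
        return 1.0
    slope, intercept = SEGMENTS[int(x)]
    return slope * x + intercept
```

In hardware, each evaluation then costs one table lookup, one multiply and one add, which is what makes the scheme attractive for a low-power generator.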
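The carry-save adders mentioned in the abstract avoid carry propagation inside multi-operand additions. A minimal behavioural sketch (bitwise arithmetic on Python integers standing in for the hardware cells) of the underlying 3:2 compression is:

```python
def carry_save_add(a: int, b: int, c: int) -> tuple[int, int]:
    """3:2 compression: reduce three operands to a sum word and a carry word.

    Each bit position is handled independently (no carry chain): the sum bit
    is the XOR of the three inputs, and the carry bit is their majority,
    shifted left one position. The invariant is a + b + c == sum + carry.
    """
    sum_word = a ^ b ^ c
    carry_word = ((a & b) | (a & c) | (b & c)) << 1
    return sum_word, carry_word
```

Chaining such compressors (as in the column-compressor trees the paper adopts) defers the single slow carry-propagating addition to the very end of a multiply-accumulate, which is why the structure suits a PE built around parallel arithmetic units.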
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Larkin, D., Kinane, A., O’Connor, N. (2006). Towards Hardware Acceleration of Neuroevolution for Multimedia Processing Applications on Mobile Devices. In: King, I., Wang, J., Chan, LW., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4234. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893295_130
Print ISBN: 978-3-540-46484-6
Online ISBN: 978-3-540-46485-3