Abstract
Deep neural networks (DNNs) are a state-of-the-art processing model in the field of machine learning. Implementing DNNs in embedded systems is necessary to realize artificial intelligence on robots and automobiles. Embedded systems demand high processing speed and low power consumption, while DNNs require considerable processing resources. A field-programmable gate array (FPGA) is one of the most suitable devices for embedded systems because of its low power consumption, high-speed processing, and reconfigurability. Autoencoders (AEs) are key building blocks of DNNs and comprise an input layer, a hidden layer, and an output layer. In this paper, we propose a novel hardware implementation of AEs with a shared synapse architecture, in which each weight value is shared between the two interlayers (input-to-hidden and hidden-to-output). This architecture conserves the limited resources of an FPGA, halving the number of synapse modules. Experimental results show that the proposed design can reconstruct input data and can be stacked. Compared with related works, the proposed design is a synthesizable register-transfer-level description and is estimated to reduce total processing time.
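The shared synapse scheme described in the abstract corresponds to the tied-weight autoencoder (cf. Droniou and Sigaud in the references), in which the decoder reuses the transpose of the encoder's weight matrix, so a single weight store serves both interlayers. The following NumPy sketch, with hypothetical names and hyperparameters, illustrates that idea in software; it is not the authors' register-transfer-level design, which shares physical synapse modules on the FPGA.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TiedAutoencoder:
    """Autoencoder whose decoder reuses the encoder weights transposed,
    so one weight matrix serves both interlayers (the shared synapse idea)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(n_hidden, n_visible))  # shared weights
        self.b_h = np.zeros(n_hidden)    # hidden-layer bias
        self.b_v = np.zeros(n_visible)   # output-layer bias

    def encode(self, x):
        return sigmoid(self.W @ x + self.b_h)

    def decode(self, h):
        return sigmoid(self.W.T @ h + self.b_v)  # W reused as W.T

    def train_step(self, x, lr=0.1):
        """One gradient step on the squared reconstruction error."""
        h = self.encode(x)
        y = self.decode(h)
        err = y - x
        d_y = err * y * (1.0 - y)             # delta at the output layer
        d_h = (self.W @ d_y) * h * (1.0 - h)  # delta at the hidden layer
        # Because W appears in both interlayers, its gradient is the sum of
        # the encoder-path and decoder-path contributions.
        self.W -= lr * (np.outer(d_h, x) + np.outer(h, d_y))
        self.b_h -= lr * d_h
        self.b_v -= lr * d_y
        return 0.5 * float(err @ err)

Stacking then proceeds greedily, as in Bengio et al. (see the references): a second AE, e.g. TiedAutoencoder(256, 64), is trained on the hidden codes produced by a first TiedAutoencoder(784, 256) via ae1.encode(x).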
References
Chainer. http://chainer.org/index.html
Bengio, Y.: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127 (2009)
Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: Schölkopf, B., Platt, J., Hofmann, T. (eds.) Advances in Neural Information Processing Systems 19, pp. 153–160. MIT Press, Cambridge (2007)
Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., Li, L., Chen, T., Xu, Z., Sun, N.: DaDianNao: a machine-learning supercomputer. In: Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 609–622 (2014)
Droniou, A., Sigaud, O.: Gated autoencoders with tied input weights. In: International Conference on Machine Learning (2013)
Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
Jin, Y., Kim, D.: Unsupervised feature learning by pre-route simulation of auto-encoder behavior model. Int. J. Comput. Electr. Autom. Control Inf. Eng. 8(5), 668–672 (2014)
Maria, J., Amaro, J., Falcão, G., Alexandre, L.A.: Stacked autoencoders using low-power accelerated architectures for object recognition in autonomous systems. Neural Process. Lett. 43, 1–14 (2015)
Park, S., Bong, K., Shin, D., Lee, J., Choi, S., Yoo, H.J.: A 1.93 TOPS/W scalable deep learning/inference processor with tetra-parallel MIMD architecture for big-data applications. In: IEEE International Solid-State Circuits Conference (ISSCC), pp. 80–82 (2015)
Acknowledgement
This research was supported by JSPS KAKENHI Grant Numbers 26330279 and 15H01706.
Copyright information
© 2016 Springer International Publishing AG
About this paper
Cite this paper
Suzuki, A., Morie, T., Tamukoh, H. (2016). FPGA Implementation of Autoencoders Having Shared Synapse Architecture. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9947. Springer, Cham. https://doi.org/10.1007/978-3-319-46687-3_25
DOI: https://doi.org/10.1007/978-3-319-46687-3_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-46686-6
Online ISBN: 978-3-319-46687-3
eBook Packages: Computer Science, Computer Science (R0)