FPGA Implementation of Autoencoders Having Shared Synapse Architecture

  • Conference paper

Neural Information Processing (ICONIP 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9947)
Abstract

Deep neural networks (DNNs) are a state-of-the-art processing model in the field of machine learning. Implementing DNNs in embedded systems is required to realize artificial intelligence on robots and automobiles. Embedded systems demand high processing speed and low power consumption, while DNNs require considerable processing resources. A field-programmable gate array (FPGA) is one of the most suitable devices for embedded systems because of its low power consumption, high-speed processing, and reconfigurability. Autoencoders (AEs) are key building blocks of DNNs and comprise an input, a hidden, and an output layer. In this paper, we propose a novel hardware implementation of AEs having a shared synapse architecture. In the proposed architecture, the value of each weight is shared between the two interlayers, input-to-hidden and hidden-to-output. This sharing saves the limited resources of an FPGA by reducing the number of synapse modules by half. Experimental results show that the proposed design can reconstruct input data and can be stacked. Compared with related works, the proposed design is a register-transfer-level description, is synthesizable, and is estimated to decrease total processing time.
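The shared-synapse idea described in the abstract corresponds, at the algorithmic level, to a tied-weight autoencoder: the decoder reuses the transpose of the encoder's weight matrix, so only one weight array is stored. The following is a minimal software sketch of that idea, not the paper's FPGA implementation; all class and variable names are illustrative.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation, used here for both encoding and decoding."""
    return 1.0 / (1.0 + np.exp(-x))

class TiedAutoencoder:
    """Autoencoder whose decoder reuses the transposed encoder weights.

    Storing a single weight matrix W (the decoder applies W.T) halves
    the number of stored synapse parameters, analogous to halving the
    synapse modules in a shared-synapse hardware design.
    """

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))  # shared weights
        self.b_h = np.zeros(n_hidden)                    # hidden-layer bias
        self.b_o = np.zeros(n_in)                        # output-layer bias

    def encode(self, x):
        return sigmoid(self.W @ x + self.b_h)

    def decode(self, h):
        # Decoder interlayer: W is reused transposed; no second array exists.
        return sigmoid(self.W.T @ h + self.b_o)

    def reconstruct(self, x):
        return self.decode(self.encode(x))

# Such AEs can be stacked: the hidden code of one AE becomes the
# input of the next, as in greedy layer-wise pretraining of DNNs.
ae1 = TiedAutoencoder(n_in=8, n_hidden=4)
ae2 = TiedAutoencoder(n_in=4, n_hidden=2)
x = np.linspace(0.0, 1.0, 8)
code = ae2.encode(ae1.encode(x))   # stacked encoding, shape (2,)
```

In hardware terms, the shared matrix means each weight value is read by two interlayer computations instead of being duplicated, which is where the halving of synapse modules comes from.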



Acknowledgement

This research was supported by JSPS KAKENHI Grant Number 26330279 and 15H01706.

Author information

Corresponding author

Correspondence to Akihiro Suzuki.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Suzuki, A., Morie, T., Tamukoh, H. (2016). FPGA Implementation of Autoencoders Having Shared Synapse Architecture. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9947. Springer, Cham. https://doi.org/10.1007/978-3-319-46687-3_25

  • DOI: https://doi.org/10.1007/978-3-319-46687-3_25
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46686-6

  • Online ISBN: 978-3-319-46687-3

  • eBook Packages: Computer Science (R0)
