
A Super-Vector Deep Learning Coprocessor with High Performance-Power Ratio

Advanced Topics in Intelligent Information and Database Systems (ACIIDS 2017)

Part of the book series: Studies in Computational Intelligence ((SCI,volume 710))


Abstract

The maturity of deep learning theory and the development of computer hardware have made deep learning algorithms powerful tools for mining the underlying features of big data. Intelligent communication and control tasks in embedded systems increasingly demand high-accuracy, real-time object detection, and the limited battery and resource budgets of such systems call for more energy-efficient deep learning accelerators. We propose a super-vector coprocessor architecture called SVP-DL. SVP-DL processes the various matrix operations used in deep learning algorithms by computing on multidimensional vectors with dedicated vector and scalar instructions, enabling flexible combinations of matrix operations and data organizations. We verified SVP-DL on a self-developed field-programmable gate array (FPGA) platform, programming a typical deep belief network and a sparse coding network on the coprocessor. Experimental results show that SVP-DL on the FPGA achieves 1.7 to 2.1 times the performance of a PC platform despite running at a much lower clock frequency, and about 9 times the performance-power efficiency of the PC.
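As a rough illustration of the kind of workload such a coprocessor targets: the hidden-layer activation of a deep belief network's restricted Boltzmann machine reduces to a matrix-vector multiply-accumulate plus an element-wise nonlinearity, which maps naturally onto vector instructions. The sketch below uses generic RBM notation (`W`, `v`, `b`) and NumPy as a stand-in; it is not SVP-DL's actual instruction set or API.

```python
import numpy as np

def sigmoid(x):
    # Element-wise nonlinearity applied after the vector multiply-accumulate.
    return 1.0 / (1.0 + np.exp(-x))

def rbm_hidden_activation(v, W, b):
    """Hidden-unit activations of one RBM layer: sigmoid(W @ v + b).

    On a vector coprocessor, W @ v becomes a sequence of vector
    multiply-accumulates over rows of W, and the sigmoid is a
    further element-wise vector operation.
    """
    return sigmoid(W @ v + b)

rng = np.random.default_rng(0)
v = rng.random(4)        # visible units (hypothetical sizes for illustration)
W = rng.random((3, 4))   # weight matrix
b = np.zeros(3)          # hidden biases
h = rbm_hidden_activation(v, W, b)
print(h.shape)  # (3,)
```

Sparse coding inference is dominated by the same primitive (dictionary-times-vector products), which is why a single vector instruction set can serve both networks evaluated in the chapter.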




Corresponding author

Correspondence to Jingfei Jiang.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Jiang, J., Liu, Z., Xu, J., Hu, R. (2017). A Super-Vector Deep Learning Coprocessor with High Performance-Power Ratio. In: Król, D., Nguyen, N., Shirai, K. (eds) Advanced Topics in Intelligent Information and Database Systems. ACIIDS 2017. Studies in Computational Intelligence, vol 710. Springer, Cham. https://doi.org/10.1007/978-3-319-56660-3_8

  • DOI: https://doi.org/10.1007/978-3-319-56660-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-56659-7

  • Online ISBN: 978-3-319-56660-3

  • eBook Packages: Engineering (R0)
