Hardware Implementation and Optimization of Tiny-YOLO Network

  • Conference paper
  • First Online:
Digital TV and Wireless Multimedia Communication (IFTC 2017)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 815)

Abstract

Convolutional Neural Networks (CNNs) have achieved extraordinary performance in image processing. However, CNNs are both computationally intensive and memory intensive, which makes them difficult to deploy on hardware devices such as embedded systems. Although much existing work has explored hardware implementations of CNNs, these designs remain either inefficient or incomplete. Consequently, in this paper we propose a highly parallel design that performs efficient CNN computation. Furthermore, unlike previous work, which rarely takes Fully-Connected (FC) layers into consideration, our design also optimizes the FC layers.

We take Tiny-YOLO, an object detection architecture, as the target network to be implemented on an FPGA platform. To reduce computing time, we build an efficient and generic computing engine in which 64 duplicated Processing Elements (PEs) work simultaneously. Inside each PE, 32 MAC operations are executed in a pipelined manner for further parallelism. To reduce the memory footprint, we take full advantage of data reuse and data sharing: parallel PEs share the same input data, and on-chip buffers cache data and weights for further reuse. Finally, we apply SVD to the FC layers, which cuts memory accesses and computing operations by 80.6% while maintaining comparable accuracy. With these optimizations, our design achieves a detection rate of over 20 FPS and a processing performance of 48 GMACS at a 143 MHz working frequency.
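To make the two main optimizations more concrete, two small Python sketches follow. They are illustrative models written for this summary, not the authors' code: the layer shapes, loop ordering, function names, and truncation rank are assumptions, and only the figures taken from the abstract (64 PEs, 32 pipelined MACs per PE, the 80.6% reduction, the 143 MHz clock) come from the paper.

The first sketch models one plausible mapping of the computing engine onto a 3x3 convolution that is consistent with the abstract: all 64 PEs read the same input window (the shared input data mentioned above) while each PE accumulates a different output channel, and every PE reduces its dot product in pipelined chunks of 32 MACs.

    import numpy as np

    def conv_layer_model(ifmap, weights, P=64, M=32):
        # Behavioural model of the accelerator's loop structure, not the RTL.
        # ifmap:   (C, H, W) input feature map
        # weights: (K, C, 3, 3) filters of a 3x3, stride-1 convolution
        # P:       number of parallel PEs (64 in the paper)
        # M:       MACs pipelined inside each PE (32 in the paper)
        C, H, W = ifmap.shape
        K = weights.shape[0]
        ofmap = np.zeros((K, H - 2, W - 2), dtype=np.float32)
        for k0 in range(0, K, P):                          # one group of P output channels
            for y in range(H - 2):
                for x in range(W - 2):
                    window = ifmap[:, y:y + 3, x:x + 3].ravel()   # shared by all PEs
                    for pe in range(min(P, K - k0)):              # executed in parallel in hardware
                        w = weights[k0 + pe].ravel()
                        acc = np.float32(0.0)
                        for i in range(0, window.size, M):        # M-wide pipelined MAC chunks
                            acc += np.dot(window[i:i + M], w[i:i + M])
                        ofmap[k0 + pe, y, x] = acc
        return ofmap

The second sketch shows the arithmetic behind the SVD compression of an FC layer: a single m x n weight matrix is replaced by an m x r and an r x n factor, so both the weights to be fetched and the MACs to be executed shrink from m*n to r*(m+n). The dimensions and rank below are made-up values chosen only so the reduction lands near the paper's figure; the actual saving depends on Tiny-YOLO's FC sizes and the rank the authors selected.

    import numpy as np

    m, n, r = 4096, 1024, 160                      # illustrative FC shape and truncation rank

    W = np.random.randn(m, n).astype(np.float32)   # original FC weight matrix
    x = np.random.randn(n).astype(np.float32)      # one input activation vector

    # Truncated SVD: W ~= U[:, :r] @ diag(S[:r]) @ Vt[:r, :]
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r]                                   # m x r
    B = S[:r, None] * Vt[:r, :]                    # r x n (singular values folded in)

    y_exact = W @ x                                # one large layer: m*n MACs
    y_approx = A @ (B @ x)                         # two small layers: r*(m+n) MACs

    print(f"MAC/weight reduction: {1 - r * (m + n) / (m * n):.1%}")   # ~80.5% with these toy values

Note that a random matrix such as the one above has no low-rank structure, so y_approx would be a poor approximation in this toy run; trained FC weights typically have fast-decaying singular values, which is why the paper can report comparable accuracy after truncation.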



Acknowledgement

This work was supported in part by the National Natural Science Foundation of China (61527804, 61301116, 61521062, 61133009, 61771306), the Chinese National Key S&T Special Program (2013ZX01033001-002-002), the 111 Project (B07022), and the Shanghai Key Laboratory of Digital Media Processing and Transmissions (STCSM 12DZ2272600).

Author information

Corresponding author

Correspondence to Li Chen.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Ma, J., Chen, L., Gao, Z. (2018). Hardware Implementation and Optimization of Tiny-YOLO Network. In: Zhai, G., Zhou, J., Yang, X. (eds) Digital TV and Wireless Multimedia Communication. IFTC 2017. Communications in Computer and Information Science, vol 815. Springer, Singapore. https://doi.org/10.1007/978-981-10-8108-8_21

  • DOI: https://doi.org/10.1007/978-981-10-8108-8_21

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-8107-1

  • Online ISBN: 978-981-10-8108-8

  • eBook Packages: Computer Science, Computer Science (R0)
