
A Hardware Accelerator for Convolutional Neural Network Using Fast Fourier Transform

  • Conference paper
  • VLSI Design and Test (VDAT 2018)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 892)

Abstract

Convolutional Neural Networks (CNNs) are biologically inspired architectures that can be trained to perform various classification tasks. A CNN typically consists of convolutional layers and max-pooling layers, followed by dense fully connected layers, with the convolutional layers being the most compute-intensive. In this paper we present an FFT (Fast Fourier Transform) based convolution technique for accelerating CNN architectures. The computational complexities of direct convolution and FFT-based convolution are evaluated and compared. We also present an efficient FFT architecture based on a radix-4 butterfly for convolution. To validate our analysis, we have implemented a convolutional layer on a Virtex-7 FPGA.
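The sketch below illustrates the convolution theorem that underlies the FFT-based approach: a direct 2-D convolution costs on the order of N²K² multiplies, while the frequency-domain route costs on the order of M² log M with M = N + K − 1. This is a minimal Python/NumPy sketch, not the authors' radix-4 FPGA architecture; the 32×32 input and 5×5 kernel sizes are illustrative assumptions.

```python
import numpy as np

def direct_conv2d(x, k):
    """Direct 'full' 2-D linear convolution: O(N^2 * K^2) multiply-accumulates."""
    N, K = x.shape[0], k.shape[0]
    M = N + K - 1
    y = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            for p in range(K):
                for q in range(K):
                    ii, jj = i - p, j - q      # y[i,j] = sum_{p,q} k[p,q] * x[i-p, j-q]
                    if 0 <= ii < N and 0 <= jj < N:
                        y[i, j] += k[p, q] * x[ii, jj]
    return y

def fft_conv2d(x, k):
    """FFT-based convolution: zero-pad to M x M, multiply spectra, inverse-transform.
    Cost is dominated by the transforms, O(M^2 log M)."""
    M = x.shape[0] + k.shape[0] - 1
    X = np.fft.fft2(x, s=(M, M))
    Kf = np.fft.fft2(k, s=(M, M))
    return np.real(np.fft.ifft2(X * Kf))

# Illustrative sizes (assumed, not from the paper): 32x32 feature map, 5x5 kernel.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))
k = rng.standard_normal((5, 5))
assert np.allclose(direct_conv2d(x, k), fft_conv2d(x, k))
```

For small kernels the direct method can still be competitive; the FFT route pays off as kernel and feature-map sizes grow, which is the trade-off the paper's complexity comparison evaluates.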



Author information

Correspondence to S. Kala.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Kala, S., Jose, B.R., Paul, D., Mathew, J. (2019). A Hardware Accelerator for Convolutional Neural Network Using Fast Fourier Transform. In: Rajaram, S., Balamurugan, N., Gracia Nirmala Rani, D., Singh, V. (eds) VLSI Design and Test. VDAT 2018. Communications in Computer and Information Science, vol 892. Springer, Singapore. https://doi.org/10.1007/978-981-13-5950-7_3


  • DOI: https://doi.org/10.1007/978-981-13-5950-7_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-5949-1

  • Online ISBN: 978-981-13-5950-7

  • eBook Packages: Computer Science, Computer Science (R0)
