A Hardware-Oriented Dropout Algorithm for Efficient FPGA Implementation

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10639)

Abstract

This paper proposes a hardware-oriented dropout algorithm for efficient field-programmable gate array (FPGA) implementation. Dropout is a regularization technique commonly used in neural networks such as multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). Software implementations usually rely on random number generators (RNGs) to produce the dropout mask that randomly drops neurons during the training phase; in hardware, however, RNGs consume considerable FPGA resources. The proposed method minimizes the resources required for an FPGA implementation of dropout by applying a simple rotation operation to a predefined dropout mask. We apply the method to MLPs and CNNs and evaluate them on MNIST and CIFAR-10 classification. In addition, we employ the method in GoogLeNet training on our own dataset to develop a vision system for home service robots. The experimental results demonstrate that the proposed method achieves the same regularization effect as the ordinary dropout algorithm, and logic synthesis results show that it significantly reduces FPGA resource consumption compared with ordinary RNG-based approaches.
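
The core mechanism is simple enough to sketch in software. Below is a minimal Python sketch of the rotation idea as described in the abstract, assuming a drop ratio of 0.5, a rotation of one position per training step, and NumPy as the vehicle; these are illustrative assumptions, not details taken from the paper, and in hardware the rotation would be realized as a circular shift of a register holding the predefined mask bits rather than a call to np.roll.

    import numpy as np

    def make_base_mask(n_units, drop_ratio, seed=0):
        # Predefine one binary mask: 1.0 keeps a unit, 0.0 drops it.
        # Roughly `drop_ratio` of the units are dropped.
        rng = np.random.default_rng(seed)
        return (rng.random(n_units) >= drop_ratio).astype(np.float32)

    def rotated_mask(base_mask, step):
        # Mask for a given training step: a circular rotation of the
        # predefined mask, so no fresh random numbers are drawn per step.
        return np.roll(base_mask, step % len(base_mask))

    # Usage: mask a layer's activations with a different rotation each step.
    base = make_base_mask(n_units=8, drop_ratio=0.5)
    activations = np.ones(8, dtype=np.float32)
    for step in range(3):
        print(step, activations * rotated_mask(base, step))

Because each training step then needs only a shift of stored mask bits, the hardware cost reduces to a register and wiring instead of per-bit random number generation, which is consistent with the resource savings the abstract reports.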

Acknowledgments

This research was supported by JSPS KAKENHI Grant Numbers 17H01798, 17K20010, 26330279, and 15H01706.

Author information

Corresponding author

Correspondence to Yoeng Jye Yeoh.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Yeoh, Y.J., Morie, T., Tamukoh, H. (2017). A Hardware-Oriented Dropout Algorithm for Efficient FPGA Implementation. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, ES. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10639. Springer, Cham. https://doi.org/10.1007/978-3-319-70136-3_87

  • DOI: https://doi.org/10.1007/978-3-319-70136-3_87

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70135-6

  • Online ISBN: 978-3-319-70136-3

  • eBook Packages: Computer Science, Computer Science (R0)
