DAEBI: A Tool for Data Flow and Architecture Explorations of Binary Neural Network Accelerators

  • Conference paper
  • Published in: Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS 2023)

Abstract

Binary Neural Networks (BNNs) are an efficient alternative to traditional neural networks, as they use binary weights and activations, leading to significant reductions in memory footprint and computational energy. However, designing efficient BNN accelerators is challenging due to the large design space. Multiple factors have to be considered during the design, among them the type of data flow and the organization of the accelerator architecture. To the best of our knowledge, no tool exists for the design space exploration of BNN accelerators with regard to these factors.
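As background for the binary arithmetic that such accelerators implement, the following minimal Python sketch shows the XNOR-popcount dot product commonly used in BNN inference; the bit encoding (1 for +1, 0 for -1) and the function name are assumptions for illustration only and are not taken from the paper.

```python
# Minimal illustration of the XNOR-popcount dot product at the core of BNN
# inference. Encoding assumption (not from the paper): bit 1 represents +1,
# bit 0 represents -1.

def xnor_popcount_dot(activations: int, weights: int, n_bits: int) -> int:
    """Dot product of two {-1, +1} vectors packed bitwise into integers."""
    xnor = ~(activations ^ weights) & ((1 << n_bits) - 1)  # 1 where bits match
    matches = bin(xnor).count("1")                          # popcount
    return 2 * matches - n_bits                             # matches - mismatches

# a = [+1, -1, +1, +1] -> 0b1011, w = [+1, +1, -1, +1] -> 0b1101 (MSB first)
print(xnor_popcount_dot(0b1011, 0b1101, 4))  # -> 0
```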

In this work, we propose DAEBI, a tool for the design space exploration of BNN accelerators, which enables designers to identify the most suitable data flow and accelerator architecture. DAEBI automatically generates VHDL code for BNN accelerator designs based on user specifications, making it convenient to explore large design spaces. Using DAEBI, we conduct a design space exploration of BNN accelerators for traditional CMOS technology using an FPGA. Our results demonstrate the capabilities of DAEBI and provide insights into the most suitable design choices. Additionally, based on a decision model, we provide insights for the design of BNN accelerators that use emerging beyond-CMOS technologies.
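To make the idea of specification-driven VHDL generation concrete, the sketch below shows how a generator of this kind might sweep a small design space and emit parameterized VHDL entity skeletons. All names (AcceleratorSpec, emit_vhdl) and parameters are hypothetical and do not reflect DAEBI's actual interface.

```python
# Hypothetical sketch of specification-driven HDL generation, in the spirit of
# a design-space-exploration tool. Names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AcceleratorSpec:
    dataflow: str        # e.g. "output-stationary" or "weight-stationary"
    num_pes: int         # number of processing elements
    xnor_width: int      # bits processed per XNOR-popcount unit per cycle

def emit_vhdl(spec: AcceleratorSpec) -> str:
    """Return a VHDL entity skeleton parameterized by the user specification."""
    return f"""
entity bnn_accelerator is
  generic (
    NUM_PES    : integer := {spec.num_pes};
    XNOR_WIDTH : integer := {spec.xnor_width}
  );
  -- dataflow: {spec.dataflow}
end entity bnn_accelerator;
""".strip()

# Sweep a small design space and emit one entity skeleton per design point.
for dataflow in ("output-stationary", "weight-stationary"):
    for num_pes in (8, 16, 32):
        print(emit_vhdl(AcceleratorSpec(dataflow, num_pes, xnor_width=64)))
```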

Acknowledgements

This paper has been supported by Deutsche Forschungsgemeinschaft (DFG) project OneMemory (405422836), by the Collaborative Research Center SFB 876 “Providing Information by Resource-Constrained Analysis” (project number 124020371), subproject A1 (http://sfb876.tu-dortmund.de), and by the Federal Ministry of Education and Research of Germany and the state of NRW as part of the Lamarr-Institute for ML and AI, LAMARR22B.

Author information

Correspondence to Mikail Yayla.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yayla, M., Latotzke, C., Huber, R., Iskif, S., Gemmeke, T., Chen, JJ. (2023). DAEBI: A Tool for Data Flow and Architecture Explorations of Binary Neural Network Accelerators. In: Silvano, C., Pilato, C., Reichenbach, M. (eds) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2023. Lecture Notes in Computer Science, vol 14385. Springer, Cham. https://doi.org/10.1007/978-3-031-46077-7_8

  • DOI: https://doi.org/10.1007/978-3-031-46077-7_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46076-0

  • Online ISBN: 978-3-031-46077-7

  • eBook Packages: Computer Science, Computer Science (R0)
