
TrainBF: High-Performance DNN Training Engine Using BFloat16 on AI Accelerators

  • Conference paper
  • First Online:
Euro-Par 2023: Parallel Processing (Euro-Par 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14100)


Abstract

Training deep neural networks (DNNs) with half-precision floating-point formats is widely supported on recent hardware and frameworks. However, current training approaches that use half-precision formats neither obtain optimal throughput, because they still involve the single-precision format, nor achieve state-of-the-art model accuracy, because of the reduced number of mantissa bits. In this work, we present a new DNN training engine, named TrainBF, which leverages the half-precision format BFloat16 to maximize training throughput while preserving sufficient model accuracy. TrainBF deploys BFloat16 across the entire training process for best throughput and improves model accuracy by introducing three proposed normalization techniques. TrainBF also remains lightweight by applying these normalization techniques only to the layers that are most critical to model accuracy. Furthermore, TrainBF implements a parallel strategy that executes operators in DNN training concurrently, exploiting the memory space freed by half-precision for better throughput. Evaluated on six common DNN models and compared with the state-of-the-art mixed-precision approach, TrainBF achieves competitive model accuracy with average throughput speedups of 1.21×, 1.74×, and 1.16× on an NVIDIA A100 GPU, an AMD MI100 GPU, and the emerging AI accelerator SambaNova, respectively.
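For context on the format itself (this explanation is ours, not the paper's): BFloat16 retains FP32's sign bit and all 8 exponent bits, so it covers the same dynamic range as single precision, but keeps only 7 of the 23 mantissa bits; the "reduced number of mantissa bits" above refers to this shortened significand. A BF16 value is therefore just the upper half of an FP32 word, as the following minimal Python sketch illustrates (function names are our own, illustrative only):

```python
# A minimal sketch, not taken from the paper: BFloat16 keeps FP32's sign bit
# and 8 exponent bits but truncates the mantissa from 23 bits to 7, so a BF16
# value is simply the upper 16 bits of the corresponding FP32 word.
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to its 16-bit BF16 bit pattern."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16  # keep sign, exponent, and the top 7 mantissa bits

def bf16_bits_to_fp32(bits16: int) -> float:
    """Widen a BF16 bit pattern back to FP32 (exact, no rounding needed)."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return x

print(bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159265)))  # -> 3.140625
```

Because the exponent width matches FP32, BF16 rarely overflows or underflows where FP32 would not, which is why BF16 training can typically skip the loss scaling that FP16 mixed precision requires. The second sketch pictures what "deploying BFloat16 across the entire training process" can look like in stock PyTorch; it illustrates the general idea only, not TrainBF's engine, its three normalization techniques, or its operator-parallel runtime:

```python
# A hedged sketch of end-to-end BF16 training in PyTorch: model weights,
# activations, and gradients all live in torch.bfloat16 (illustrative only;
# TrainBF's actual implementation is not reproduced here).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
model = model.to(torch.bfloat16)                  # weights stored in BF16
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 64, dtype=torch.bfloat16)     # inputs in BF16
y = torch.randint(0, 10, (32,))                   # integer class labels

opt.zero_grad()
loss = nn.functional.cross_entropy(model(x).float(), y)
loss.backward()                                   # backward pass in BF16
opt.step()
print(f"loss after one BF16 step: {loss.item():.4f}")
```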



Acknowledgment

This research was funded in part by, and used resources of, the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

Author information


Corresponding author

Correspondence to Zhen Xie.



Copyright information

© 2023 UChicago Argonne, LLC, Operator of Argonne National Laboratory

About this paper


Cite this paper

Xie, Z., Raskar, S., Emani, M., Vishwanath, V. (2023). TrainBF: High-Performance DNN Training Engine Using BFloat16 on AI Accelerators. In: Cano, J., Dikaiakos, M.D., Papadopoulos, G.A., Pericàs, M., Sakellariou, R. (eds) Euro-Par 2023: Parallel Processing. Euro-Par 2023. Lecture Notes in Computer Science, vol 14100. Springer, Cham. https://doi.org/10.1007/978-3-031-39698-4_31


  • DOI: https://doi.org/10.1007/978-3-031-39698-4_31

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-39697-7

  • Online ISBN: 978-3-031-39698-4

  • eBook Packages: Computer Science, Computer Science (R0)
