Performance and Energy Aware Training of a Deep Neural Network in a Multi-GPU Environment with Power Capping

  • Conference paper
  • In: Euro-Par 2023: Parallel Processing Workshops (Euro-Par 2023)

Abstract

In this paper we demonstrate that considerable improvements in performance- and energy-aware metrics can be obtained for the training of deep neural networks on a modern parallel multi-GPU system by enforcing selected, non-default power caps on the GPUs. We measure the power and energy consumption of the whole node using a professional, certified hardware power meter. For a high-performance workstation with 8 GPUs, we were able to find non-default GPU power cap settings within the range of 160–200 W that improve the difference between percentage energy gain and performance loss by over 15.0%, EDP (abbreviations and terms are explained in the main text) by over 17.3%, EDS with k = 1.5 by over 2.2%, EDS with k = 2.0 by over 7.5%, and pure energy by over 25%, compared to the default power cap setting of 260 W per GPU. These findings demonstrate the potential of today’s CPU+GPU systems for configuration improvement in the context of performance-energy consumption metrics.
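
The metrics referenced above can be illustrated with a short sketch. The following Python snippet is not the authors' code; it only shows, under stated assumptions, how the difference between percentage energy gain and performance loss, EDP (energy-delay product), and pure energy savings can be computed from the measured training time and metered node energy for a candidate power cap versus the default one. The EDS metric is left out here, since its exact definition follows the authors' earlier work cited in the paper, and all numbers in the example call are hypothetical.

    # Minimal sketch (not the authors' code): compare a candidate GPU power cap
    # against the default setting using wall-clock time (s) and metered energy (J).
    def compare_power_cap(t_default, e_default, t_cap, e_cap):
        energy_gain_pct = 100.0 * (e_default - e_cap) / e_default   # % energy saved
        perf_loss_pct = 100.0 * (t_cap - t_default) / t_default     # % slowdown
        return {
            "energy_gain_minus_perf_loss_pct": energy_gain_pct - perf_loss_pct,
            "edp": e_cap * t_cap,                                    # energy-delay product
            "edp_improvement_pct": 100.0 * (1.0 - (e_cap * t_cap) / (e_default * t_default)),
            "energy_improvement_pct": energy_gain_pct,
        }

    # Hypothetical measurements, for illustration only.
    print(compare_power_cap(t_default=3600.0, e_default=9.0e6, t_cap=3800.0, e_cap=6.5e6))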

Notes

  1. Power capping is a mechanism that limits the power draw of a computing device such as a CPU or a GPU; it is available through Intel RAPL for Intel CPUs and NVIDIA NVML for NVIDIA GPUs. It can reduce performance but opens up the possibility of optimizing energy consumption, even over extended application execution times [9,10,11]. A minimal example of setting a GPU power cap through NVML is sketched after these notes.

  2. https://energyestimation.mit.edu/.

  3. Due to space constraints, this data is available at https://cdn.files.pg.edu.pl/eti/KASK/RAW2023-paper-supplementary-data/Supplementary_data_Performance_and_power_analysis_of_training_and_performance_quality.pdf.

References

  1. Chen, G., Wang, X.: Performance optimization of machine learning inference under latency and server power constraints. In: 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), pp. 325–335 (2022). https://doi.org/10.1109/ICDCS54860.2022.00039

  2. Chollet, F.: Xception: deep learning with depthwise separable convolutions. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800–1807, July 2017. https://doi.org/10.1109/CVPR.2017.195

  3. Czarnul, P., Proficz, J., Drypczewski, K.: Survey of methodologies, approaches, and challenges in parallel programming using high-performance computing systems. Sci. Program. 2020, 4176794:1–4176794:19 (2020). https://doi.org/10.1155/2020/4176794

  4. García-Martín, E., Rodrigues, C.F., Riley, G., Grahn, H.: Estimation of energy consumption in machine learning. J. Parallel Distrib. Comput. 134, 75–88 (2019). https://doi.org/10.1016/j.jpdc.2019.07.007, https://www.sciencedirect.com/science/article/pii/S0743731518308773

  5. He, X., et al.: Enabling energy-efficient DNN training on hybrid GPU-FPGA accelerators. In: Proceedings of the ACM International Conference on Supercomputing, ICS 2021, pp. 227–241. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3447818.3460371

  6. Jabłońska, K., Czarnul, P.: Benchmarking deep neural network training using multi- and many-core processors. In: Saeed, K., Dvorský, J. (eds.) CISIM 2020. LNCS, vol. 12133, pp. 230–242. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-47679-3_20

  7. Kang, D.K., Lee, K.B., Kim, Y.C.: Cost efficient GPU cluster management for training and inference of deep learning. Energies 15(2), 474 (2022). https://doi.org/10.3390/en15020474, https://www.mdpi.com/1996-1073/15/2/474

  8. Kocot, B., Czarnul, P., Proficz, J.: Energy-aware scheduling for high-performance computing systems: a survey. Energies 16(2), 890 (2023). https://doi.org/10.3390/en16020890, https://www.mdpi.com/1996-1073/16/2/890

  9. Krzywaniak, A., Czarnul, P.: Performance/Energy aware optimization of parallel applications on GPUs under power capping. In: Wyrzykowski, R., Deelman, E., Dongarra, J., Karczewski, K. (eds.) PPAM 2019. LNCS, vol. 12044, pp. 123–133. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43222-5_11

  10. Krzywaniak, A., Czarnul, P., Proficz, J.: GPU power capping for energy-performance trade-offs in training of deep convolutional neural networks for image recognition. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds.) Computational Science - ICCS 2022, pp. 667–681. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-08751-6_48

  11. Krzywaniak, A., Czarnul, P., Proficz, J.: DEPO: a dynamic energy-performance optimizer tool for automatic power capping for energy efficient high-performance computing. Softw. Pract. Exp. 52(12), 2598–2634 (2022). https://doi.org/10.1002/spe.3139, https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.3139

  12. Lai, C., Ahmad, S., Dubinsky, D., Maver, C.: AI is harming our planet: addressing AI’s staggering energy cost, May 2022. https://www.numenta.com/blog/2022/05/24/ai-is-harming-our-planet/

  13. Leng, J., et al.: GPUWattch: enabling energy optimizations in GPGPUs. SIGARCH Comput. Archit. News 41(3), 487–498 (2013). https://doi.org/10.1145/2508148.2485964

  14. Mazuecos Pérez, M.D., Seiler, N.G., Bederián, C.S., Wolovick, N., Vega, A.J.: Power efficiency analysis of a deep learning workload on an IBM “Minsky” platform. In: Meneses, E., Castro, H., Barrios Hernández, C.J., Ramos-Pollan, R. (eds.) CARLA 2018. CCIS, vol. 979, pp. 255–262. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16205-4_19

  15. McDonald, J., Li, B., Frey, N., Tiwari, D., Gadepally, V., Samsi, S.: Great power, great responsibility: recommendations for reducing energy for training language models. In: Findings of the Association for Computational Linguistics: NAACL 2022. Association for Computational Linguistics (2022). https://doi.org/10.18653/v1/2022.findings-naacl.151

  16. Rouhani, B.D., Mirhoseini, A., Koushanfar, F.: Delight: adding energy dimension to deep neural networks. In: Proceedings of the 2016 International Symposium on Low Power Electronics and Design, ISLPED 2016, pp. 112–117. Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2934583.2934599

  17. Schuchart, J., et al.: The READEX formalism for automatic tuning for energy efficiency. Computing 99(8), 727–745 (2017)

  18. Tao, Y., Ma, R., Shyu, M.L., Chen, S.C.: Challenges in energy-efficient deep neural network training with FPGA. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1602–1611 (2020). https://doi.org/10.1109/CVPRW50498.2020.00208

  19. Wang, F., Zhang, W., Lai, S., Hao, M., Wang, Z.: Dynamic GPU energy optimization for machine learning training workloads. IEEE Trans. Parallel Distrib. Syst. 33(11), 2943–2954 (2022). https://doi.org/10.1109/TPDS.2021.3137867

  20. Xu, Y., Martínez-Fernández, S., Martinez, M., Franch, X.: Energy efficiency of training neural network architectures: an empirical study (2023)

  21. Yang, H., Zhu, Y., Liu, J.: ECC: platform-independent energy-constrained deep neural network compression via a bilinear regression model. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11198–11207. IEEE Computer Society, Los Alamitos, CA, USA, June 2019. https://doi.org/10.1109/CVPR.2019.01146, https://doi.ieeecomputersociety.org/10.1109/CVPR.2019.01146

  22. Yang, T.J., Chen, Y.H., Emer, J., Sze, V.: A method to estimate the energy consumption of deep neural networks. In: 2017 51st Asilomar Conference on Signals, Systems, and Computers, pp. 1916–1920 (2017). https://doi.org/10.1109/ACSSC.2017.8335698

  23. Yang, Z., Meng, L., Chung, J.W., Chowdhury, M.: Chasing low-carbon electricity for practical and sustainable DNN training (2023)

  24. You, J., Chung, J.W., Chowdhury, M.: Zeus: understanding and optimizing GPU energy consumption of DNN training. In: 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2023), pp. 119–139. USENIX Association, Boston, MA, April 2023. https://www.usenix.org/conference/nsdi23/presentation/you

  25. Zou, P., Li, A., Barker, K., Ge, R.: Indicator-directed dynamic power management for iterative workloads on GPU-accelerated systems. In: 2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), pp. 559–568 (2020). https://doi.org/10.1109/CCGrid49817.2020.00-37

Acknowledgment

We would like to thank the administrator of the HPC server at the Department of Computer Architecture at GUT, Dr. Tomasz Boiński, for his support in setting up the testbed environment. This work is supported by CERCIRAS COST Action CA19135, funded by the COST Association, as well as by statutory funds of the Department of Computer Architecture, Faculty of Electronics, Telecommunications and Informatics, Gdańsk Tech.

Author information

Corresponding author

Correspondence to Paweł Czarnul.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Koszczał, G., Dobrosolski, J., Matuszek, M., Czarnul, P. (2024). Performance and Energy Aware Training of a Deep Neural Network in a Multi-GPU Environment with Power Capping. In: Zeinalipour, D., et al. Euro-Par 2023: Parallel Processing Workshops. Euro-Par 2023. Lecture Notes in Computer Science, vol 14352. Springer, Cham. https://doi.org/10.1007/978-3-031-48803-0_1

  • DOI: https://doi.org/10.1007/978-3-031-48803-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48802-3

  • Online ISBN: 978-3-031-48803-0

  • eBook Packages: Computer Science, Computer Science (R0)
