Machine Learning for Generic Energy Models of High Performance Computing Resources

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12761)

Abstract

This article presents a study of the generalization capabilities of forecasting techniques used to build empirical energy consumption models of high performance computing resources. This is a relevant subject, given the large energy consumption of modern supercomputing facilities. Several energy models are built with different forecasting techniques, using data from the execution of a benchmark on different hardware. A cross-evaluation is performed in which the training data of each model is gradually extended with data from other hardware, and each model is analyzed to evaluate how the new information impacts its prediction capabilities. The main results indicate that neural network approaches achieve the highest-quality predictions when the training data is expanded with minimal information from new scenarios.
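
To make the cross-evaluation procedure concrete, the minimal sketch below (Python with scikit-learn) mimics the workflow described in the abstract: two forecasting models, a linear baseline and a neural network, are trained on benchmark measurements from one hardware platform, and the training set is then gradually extended with small fractions of measurements from a second, previously unseen platform while tracking the prediction error on held-out data from that platform. This is an illustrative sketch only; the file name benchmark_runs.csv, the column names, and the platform labels are assumptions, not the dataset or code used in the paper.

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature and target columns; the paper's actual predictors come
# from benchmark executions on real hardware, not from this schema.
FEATURES = ["cpu_utilization", "memory_bandwidth", "active_cores", "frequency_mhz"]
TARGET = "energy_joules"

# Assumed input: one row per benchmark execution, labeled with the platform it ran on.
data = pd.read_csv("benchmark_runs.csv")
source_hw = data[data["platform"] == "hardware_A"]   # platform used for the base model
unseen_hw = data[data["platform"] == "hardware_B"]   # previously unseen platform

models = {
    "linear": LinearRegression(),
    "neural_net": make_pipeline(StandardScaler(),
                                MLPRegressor(hidden_layer_sizes=(64, 32),
                                             max_iter=2000, random_state=0)),
}

test = unseen_hw.sample(frac=0.5, random_state=0)    # held-out data from the new hardware
pool = unseen_hw.drop(test.index)                    # data available to extend training

for name, model in models.items():
    for frac in (0.0, 0.05, 0.10, 0.25):             # gradually extend the training set
        extra = pool.sample(frac=frac, random_state=0) if frac > 0 else pool.iloc[0:0]
        train = pd.concat([source_hw, extra])
        model.fit(train[FEATURES], train[TARGET])
        mae = mean_absolute_error(test[TARGET], model.predict(test[FEATURES]))
        print(f"{name:>10s} | new-hardware data added: {frac:4.0%} | MAE = {mae:.2f} J")

The fraction sweep (0%, 5%, 10%, 25%) corresponds to the idea of expanding the training data with minimal information from new scenarios and observing how prediction quality changes for each technique.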

Author information

Corresponding author

Correspondence to Jonathan Muraña.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Muraña, J., Navarrete, C., Nesmachnow, S. (2021). Machine Learning for Generic Energy Models of High Performance Computing Resources. In: Jagode, H., Anzt, H., Ltaief, H., Luszczek, P. (eds) High Performance Computing. ISC High Performance 2021. Lecture Notes in Computer Science (LNTCS), vol. 12761. Springer, Cham. https://doi.org/10.1007/978-3-030-90539-2_21

  • DOI: https://doi.org/10.1007/978-3-030-90539-2_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90538-5

  • Online ISBN: 978-3-030-90539-2

  • eBook Packages: Computer Science, Computer Science (R0)
