A Unified Framework for Assessing Energy Efficiency of Machine Learning

  • Conference paper
  • In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022)

Abstract

State-of-the-art machine learning (ML) systems show exceptional qualitative performance, but can also have a negative impact on society. With regard to global climate change, the question of resource consumption and sustainability becomes more and more urgent. The enormous energy footprint of individual ML applications and experiments was recently investigated. However, environment-aware users require a unified framework to assess, compare, and report the efficiency and performance trade-offs of different methods and models. In this work we propose novel efficiency aggregation, indexing, and rating procedures for ML applications. To this end, we devise a set of metrics that allow for a holistic view, taking task type, abstract model, software, and hardware into account. As a result, ML systems become comparable even across different execution environments. Inspired by the EU's energy label system, we also introduce a concept for visually communicating efficiency information to the public in a comprehensible way. We apply our methods to over 20 SOTA models on a range of hardware architectures, giving an overview of the modern ML efficiency landscape.
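The paper's exact aggregation and rating formulas are not reproduced on this page. As a rough illustration of the general idea (all reference values, boundary thresholds, and function names below are hypothetical, not the paper's definitions), one might index a measured metric against a reference model and bin the resulting index into EU-style letter ratings:

```python
# Hypothetical sketch of metric indexing and EU-energy-label-style rating.
# Reference values, rating boundaries, and names are illustrative only.

def index_metric(value, reference, higher_is_better=True):
    """Index a measurement relative to a reference model (index 1.0 = reference)."""
    if higher_is_better:
        return value / reference
    # Invert for cost-like metrics (e.g. energy), so a larger index is always better.
    return reference / value

def letter_rating(index, boundaries=(0.5, 0.8, 1.2, 2.0)):
    """Map an index to an energy-label-like rating from E (worst) to A (best)."""
    labels = "EDCBA"
    position = sum(index >= b for b in boundaries)  # count of boundaries passed
    return labels[position]

# Example: a model draws 150 J per inference vs. a 300 J reference model.
idx = index_metric(150.0, 300.0, higher_is_better=False)
print(idx, letter_rating(idx))  # -> 2.0 A
```

Indexing against a fixed reference is what makes models measured on different hardware comparable on one scale; the binning step then turns the continuous index into a label a non-expert can read at a glance.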


Notes

  1. www.github.com/raphischer/imagenet-energy-efficiency.
  2. www.pypi.org/project/pynvml/.
  3. www.pypi.org/project/pyRAPL/.
  4. www.intel.com/content/www/us/en/developer/articles/tool/power-gadget.html.
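Profiling tools such as pynvml, pyRAPL, or Intel Power Gadget (footnotes 2-4) report instantaneous power draw; an energy figure is then typically obtained by integrating those readings over the run time. The sketch below shows only that integration step, with synthetic samples standing in for real tool output (the sampling source and sample format are assumptions, not the tools' actual APIs):

```python
# Sketch: estimating energy (joules) by integrating sampled power (watts)
# over time with the trapezoidal rule. The (timestamp, power) samples would
# come from a profiling tool such as pynvml or pyRAPL; here they are synthetic.

def energy_joules(samples):
    """samples: list of (time_seconds, power_watts) tuples, sorted by time."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)  # trapezoid area between samples
    return total

# Two seconds at a constant 100 W -> 200 J
print(energy_joules([(0.0, 100.0), (1.0, 100.0), (2.0, 100.0)]))  # -> 200.0
```

A higher sampling rate tightens the estimate, since power draw can spike between samples during bursty workloads.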


Acknowledgement

This research has been funded by the Federal Ministry of Education and Research of Germany and the state of North-Rhine Westphalia as part of the Lamarr-Institute for Machine Learning and Artificial Intelligence, LAMARR22B.

Author information

Correspondence to Raphael Fischer.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Fischer, R., Jakobs, M., Mücke, S., Morik, K. (2023). A Unified Framework for Assessing Energy Efficiency of Machine Learning. In: Koprinska, I., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022. Communications in Computer and Information Science, vol 1752. Springer, Cham. https://doi.org/10.1007/978-3-031-23618-1_3

  • DOI: https://doi.org/10.1007/978-3-031-23618-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23617-4

  • Online ISBN: 978-3-031-23618-1

  • eBook Packages: Computer Science (R0)
