Requirements for an Enterprise AI Benchmark

Conference paper in: Performance Evaluation and Benchmarking for the Era of Artificial Intelligence (TPCTC 2018)

Abstract

Artificial Intelligence (AI) is now the center of attention for many industries, ranging from private companies to academic institutions. While domains of interest and AI applications vary, one concern remains the same for everyone: how can we determine whether an end-to-end AI solution performs well? As AI spreads to more industries, which metrics should serve as the reference for AI applications and benchmarks in the enterprise space? This paper intends to answer some of these questions. At present, AI benchmarks focus either on evaluating deep learning approaches or on infrastructure capabilities. Unfortunately, these approaches do not capture the end-to-end performance behavior of enterprise AI workloads. It is also clear that no single reference metric will be suitable for all AI applications or for all existing platforms. We first present the state of the art regarding the current and most popular AI benchmarks. We then present the main characteristics of AI workloads from various industrial domains. Finally, we focus on the needs of ongoing and future industry AI benchmarks and conclude on the gaps that must be closed to improve AI benchmarks for enterprise workloads.
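The distinction the abstract draws between kernel-level and end-to-end measurement can be made concrete with a small timing harness. The sketch below is purely illustrative and not taken from the paper: it times each stage of a toy ingest-preprocess-infer pipeline separately and end to end, using only the Python standard library; all stage implementations, function names, and workload sizes are hypothetical placeholders.

```python
# Illustrative sketch only (not from the paper): contrasts a kernel-level
# measurement with an end-to-end one for a toy AI serving pipeline.
# All stage implementations and sizes are hypothetical placeholders.
import random
import time


def ingest(n_records: int) -> list[list[float]]:
    """Stand-in for data loading (e.g. reading records from storage)."""
    return [[random.random() for _ in range(32)] for _ in range(n_records)]


def preprocess(batch: list[list[float]]) -> list[list[float]]:
    """Stand-in for feature engineering / normalization."""
    return [[x * 2.0 - 1.0 for x in row] for row in batch]


def infer(batch: list[list[float]]) -> list[float]:
    """Stand-in for the model kernel a micro-benchmark would time in isolation."""
    return [sum(row) / len(row) for row in batch]


def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - start


if __name__ == "__main__":
    records, t_ingest = timed(ingest, 10_000)
    features, t_prep = timed(preprocess, records)
    _, t_infer = timed(infer, features)
    t_total = t_ingest + t_prep + t_infer

    # A kernel-only benchmark would report just t_infer; an enterprise
    # workload is better characterized by the full pipeline latency.
    print(f"inference kernel only : {t_infer:.4f} s")
    print(f"end-to-end pipeline   : {t_total:.4f} s "
          f"(ingest {t_ingest:.4f}, preprocess {t_prep:.4f}, infer {t_infer:.4f})")
```

In this toy setting the inference kernel accounts for only part of the total latency, which is the kind of gap between component-level and end-to-end views that the paper argues enterprise AI benchmarks need to capture.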



Author information

Corresponding author

Correspondence to Rajesh Bordawekar.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Bourrasset, C., et al. (2019). Requirements for an Enterprise AI Benchmark. In: Nambiar, R., Poess, M. (eds) Performance Evaluation and Benchmarking for the Era of Artificial Intelligence. TPCTC 2018. Lecture Notes in Computer Science, vol. 11135. Springer, Cham. https://doi.org/10.1007/978-3-030-11404-6_6

  • DOI: https://doi.org/10.1007/978-3-030-11404-6_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-11403-9

  • Online ISBN: 978-3-030-11404-6

  • eBook Packages: Computer Science, Computer Science (R0)
