Abstract
For emerging datacenter (DC) workloads, such as online Internet services and offline data analytics, evaluating the upper-bound performance of a system and making apples-to-apples comparisons are fundamental problems. To this end, a unified computation-centric metric is essential. FLOPS (FLoating-point Operations Per Second), the most influential computation-centric performance metric, has guided the evolution of computing systems for many years. However, our observations show that the average FLOPS efficiency of DC workloads is only 0.1%, which implies that FLOPS is inappropriate for DC computing. To address this issue, and inspired by FLOPS, we propose BOPS (Basic Operations Per Second), i.e., the average number of BOPs (Basic OPerations) completed per second, as a new computation-centric metric. We conduct a comprehensive analysis of the characteristics of seventeen typical DC workloads and extract the minimum representative set of computation operations, which consists of integer and floating-point operations for arithmetic, comparison, and array addressing. We then give a formal definition of BOPS and a BOPS-based upper-bound performance model, and implement a BOPS measuring tool. To validate the BOPS metric, we perform experiments with the seventeen DC workloads on three typical Intel processor platforms. First, BOPS reflects the performance gap between different computing systems: the bias between the peak-BOPS gap (obtained from the micro-architecture) and the gap in the average wall-clock time of the DC workloads is no more than 10%. Second, BOPS not only enables apples-to-apples comparisons but also reflects the upper-bound performance of a system. For example, we analyze the BOPS efficiency of the Redis (online service) workload and the Sort (offline analytics) workload; using the BOPS measuring tool, Sort achieves 32% BOPS efficiency on the experimental platform.
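To illustrate the idea behind the metric, the following is a minimal, hypothetical sketch in Python: it instruments a merge sort to count "basic operations" (comparisons, arithmetic, and array addressing), then divides the count by wall-clock time to obtain an achieved BOPS figure and compares it against an assumed platform peak. The operation set, the per-statement operation costs, and the `PEAK_BOPS` constant are all illustrative assumptions, not the paper's normative definition or the authors' actual measuring tool.

```python
import time

def merge_sort_counting(a):
    """Merge sort instrumented to count 'basic operations' in the
    spirit of BOPs. The counted set (comparisons, arithmetic, array
    addressing) is an illustrative assumption."""
    ops = 0

    def sort(xs):
        nonlocal ops
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        ops += 1                        # arithmetic: midpoint computation
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            ops += 1                    # comparison of two elements
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
            ops += 2                    # array addressing + index update
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    return sort(a), ops

data = list(range(100_000, 0, -1))      # worst-case descending input
start = time.perf_counter()
sorted_data, bops = merge_sort_counting(data)
elapsed = time.perf_counter() - start

achieved_bops_per_s = bops / elapsed
PEAK_BOPS = 5e9                         # assumed peak BOPS of the platform
print(f"BOPs counted: {bops}")
print(f"Achieved: {achieved_bops_per_s:.3e} BOPS, "
      f"efficiency vs. assumed peak: {achieved_bops_per_s / PEAK_BOPS:.1%}")
```

A real measuring tool would count retired instructions of the relevant classes via hardware performance counters rather than source-level instrumentation, but the efficiency ratio (achieved BOPS over peak BOPS) is computed the same way.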
Acknowledgment
This work is supported by the National Key Research and Development Plan of China Grant No. 2016YFB1000201.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Wang, L., Gao, W., Yang, K., Jiang, Z. (2020). BOPS, A New Computation-Centric Metric for Datacenter Computing. In: Gao, W., Zhan, J., Fox, G., Lu, X., Stanzione, D. (eds) Benchmarking, Measuring, and Optimizing. Bench 2019. Lecture Notes in Computer Science(), vol 12093. Springer, Cham. https://doi.org/10.1007/978-3-030-49556-5_25
Print ISBN: 978-3-030-49555-8
Online ISBN: 978-3-030-49556-5