DOI: 10.1145/3185768.3186286
Research article

A Cloud Benchmark Suite Combining Micro and Applications Benchmarks

Published: 02 April 2018

Abstract

Micro and application performance benchmarks are commonly used to guide cloud service selection. However, they are often considered in isolation, in hardly reproducible setups, and with flawed execution strategies. This paper presents a new execution methodology that combines micro and application benchmarks into a benchmark suite called RMIT Combined, integrates this suite into an automated cloud benchmarking environment, and implements a repeatable execution strategy. Additionally, we contribute a newly crafted Web serving benchmark called WPBench with three different load scenarios. A case study in the Amazon EC2 cloud demonstrates that choosing a cost-efficient instance type can deliver up to 40% better performance at 40% lower cost for the Web serving benchmark WPBench. Contrary to prior research, our findings reveal that network performance no longer varies to a relevant degree. Our results also show that choosing a modern virtualization type can improve disk utilization by up to 10% for I/O-heavy workloads.
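
To make the repeatable execution strategy concrete, the sketch below shows one way a randomized, interleaved execution order for a combined suite of micro and application benchmarks could be generated, so that every benchmark is measured equally often while ordering effects are spread across rounds rather than accumulating behind a fixed sequence. The benchmark names and the rmit_schedule helper are illustrative assumptions for this page, not the paper's actual tooling or configuration.

    import random

    # Placeholder benchmark names; the actual RMIT Combined suite and the
    # WPBench load scenarios are defined in the paper, not here.
    MICRO_BENCHMARKS = ["cpu_sysbench", "mem_stream", "disk_fio", "net_iperf"]
    APP_BENCHMARKS = ["wpbench_read", "wpbench_search", "wpbench_write"]

    def rmit_schedule(benchmarks, rounds, seed=42):
        """Build a randomized, interleaved execution schedule.

        Every benchmark runs exactly once per round, but the order within
        each round is re-randomized so that order- and interference-related
        bias is spread across rounds instead of favoring one fixed sequence.
        """
        rng = random.Random(seed)  # fixed seed keeps the schedule reproducible
        schedule = []
        for round_no in range(rounds):
            order = list(benchmarks)
            rng.shuffle(order)
            schedule.append((round_no, order))
        return schedule

    if __name__ == "__main__":
        suite = MICRO_BENCHMARKS + APP_BENCHMARKS
        for round_no, order in rmit_schedule(suite, rounds=3):
            print(f"round {round_no}: {' -> '.join(order)}")

As a side note on the case-study figures quoted above, 40% better performance at 40% lower cost corresponds to roughly 1.4 / 0.6, i.e. about 2.3 times the performance per dollar.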

Information

Published In

ICPE '18: Companion of the 2018 ACM/SPEC International Conference on Performance Engineering
April 2018
212 pages
ISBN: 9781450356299
DOI: 10.1145/3185768
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. application benchmark
  2. benchmarking
  3. cloud computing
  4. micro benchmark
  5. performance
  6. web application

Qualifiers

  • Research-article

Funding Sources

  • WASP

Conference

ICPE '18

Acceptance Rates

Overall Acceptance Rate 252 of 851 submissions, 30%

Article Metrics

  • Downloads (last 12 months): 37
  • Downloads (last 6 weeks): 2
Reflects downloads up to 17 Feb 2025

Cited By

  • (2023) Kairos: Building Cost-Efficient Machine Learning Inference Systems with Heterogeneous Cloud Resources. Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing, 3-16. DOI: 10.1145/3588195.3592997. Online publication date: 7-Aug-2023.
  • (2023) A multi-faceted analysis of the performance variability of virtual machines. Software: Practice and Experience 53(11), 2067-2091. DOI: 10.1002/spe.3244. Online publication date: 24-Jul-2023.
  • (2021) RIBBON. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1-13. DOI: 10.1145/3458817.3476168. Online publication date: 14-Nov-2021.
  • (2020) Microservices: A Performance Tester's Dream or Nightmare? Proceedings of the ACM/SPEC International Conference on Performance Engineering, 138-149. DOI: 10.1145/3358960.3379124. Online publication date: 20-Apr-2020.
  • (2020) An Event-Driven Approach to Serverless Seismic Imaging in the Cloud. IEEE Transactions on Parallel and Distributed Systems 31(9), 2032-2049. DOI: 10.1109/TPDS.2020.2982626. Online publication date: 1-Sep-2020.
  • (2019) Factors affecting cloud infra-service development lead times. Proceedings of the 41st International Conference on Software Engineering: Software Engineering in Practice, 233-242. DOI: 10.1109/ICSE-SEIP.2019.00033. Online publication date: 27-May-2019.
  • (2018) A Benchmark Model for the Creation of Compute Instance Performance Footprints. Internet and Distributed Computing Systems, 221-234. DOI: 10.1007/978-3-030-02738-4_19. Online publication date: 17-Oct-2018.
