Abstract
Work on performance benchmarking began long ago. Ranging from simple benchmarks that target a specific system or component to complex benchmarks for large infrastructures, performance benchmarks have helped improve successive generations of systems. However, because most systems today must guarantee high availability and reliability, the focus needs to shift from measuring pure performance to measuring both performance and dependability. Research on dependability benchmarking started at the beginning of this decade and has already led to the proposal of several benchmarks. Nevertheless, no dependability benchmark has yet achieved the status of a real benchmark endorsed by a standardization body or corporation. In this paper we argue that standardization bodies must shift their focus and start including dependability metrics in their benchmarks. We present an overview of the state of the art in dependability benchmarking and define a set of research needs and challenges that must be addressed to establish real dependability benchmarks.
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Vieira, M., Madeira, H. (2009). From Performance to Dependability Benchmarking: A Mandatory Path. In: Nambiar, R., Poess, M. (eds) Performance Evaluation and Benchmarking. TPCTC 2009. Lecture Notes in Computer Science, vol 5895. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10424-4_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-10423-7
Online ISBN: 978-3-642-10424-4