Research article · DOI: 10.1145/2063348.2063351

How to measure useful, sustained performance

Published: 12 November 2011

ABSTRACT

Sustained performance is the amount of useful work a system can produce in a given amount of time on a regular basis. How much useful work a system can achieve is difficult to assess in a simple, general manner, because different communities have their own views of what useful work means and because a large number of system characteristics influence a system's usefulness. Yet, as a community, we know, intuitively and sometimes explicitly, when one system is more useful than another. We also know when measures do not accurately portray a system's usefulness. This report reviews the important concepts of measuring sustained performance, discusses different approaches for making the measurements, and points out some of the issues that prevent our community from developing common, effective measures.
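
The abstract defines sustained performance as useful work per unit time, measured regularly. As a minimal illustrative sketch only (not the method the report prescribes), the snippet below turns hypothetical per-application measurements into a single composite figure using a geometric mean of per-application rates; the application names, work counts, and the choice of the geometric mean are all assumptions made here for illustration.

    import math

    # Hypothetical per-application results: "work" is an application-defined
    # count of useful work (e.g., reference operations) and "seconds" is the
    # measured wall-clock time on the system under test.
    results = {
        "climate_app":   {"work": 4.2e15, "seconds": 3600.0},
        "materials_app": {"work": 1.8e15, "seconds": 2400.0},
        "fusion_app":    {"work": 9.6e14, "seconds": 1800.0},
    }

    def sustained_rate(work, seconds):
        """Per-application sustained rate: useful work divided by elapsed time."""
        return work / seconds

    def composite_sustained_performance(results):
        """Combine per-application rates with a geometric mean so that no
        single application dominates the composite figure."""
        rates = [sustained_rate(r["work"], r["seconds"]) for r in results.values()]
        return math.exp(sum(math.log(x) for x in rates) / len(rates))

    if __name__ == "__main__":
        for name, r in results.items():
            print(f"{name}: {sustained_rate(r['work'], r['seconds']):.3e} ops/s")
        print(f"composite (geometric mean): {composite_sustained_performance(results):.3e} ops/s")

Other aggregations (arithmetic or harmonic means, time-to-solution sums) yield different composite numbers from the same measurements, which is one reason a single figure of merit is hard to agree on.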

Published in: SC '11: State of the Practice Reports, November 2011, 242 pages
ISBN: 9781450311397
DOI: 10.1145/2063348
Copyright © 2011 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
