ABSTRACT
Sustained performance is the amount of useful work a system can produce in a given amount of time on a regular basis. How much useful work a system can achieve is difficult to assess in a simple, general manner, both because different communities have their own views of what useful work means and because a large number of system characteristics influence a system's usefulness. Yet we, as a community, intuitively (and sometimes explicitly) know when one system is more useful than another. Conversely, we also know when measures do not accurately portray a system's usefulness. This report reviews the important concepts of measuring sustained performance, discusses different approaches to making the measurements, and points out some of the issues that prevent our community from developing common, effective measures.
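One family of approaches the report alludes to is summarizing per-application rates from a representative workload into a single composite number. A minimal sketch, assuming a geometric-mean composite in the style of NERSC's SSP metric; the application names and rates here are hypothetical, purely for illustration:

```python
import math

# Hypothetical measured rates (Gflop/s) for three workload applications.
rates = {"app_a": 120.0, "app_b": 45.0, "app_c": 310.0}

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n values,
    computed via logs to avoid overflow on long workloads."""
    values = list(values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

composite = geometric_mean(rates.values())
print(f"Composite sustained performance: {composite:.1f} Gflop/s")
```

The geometric mean is often preferred over the arithmetic mean for rate-based summaries because it is insensitive to which system is chosen as the normalization baseline, though, as the report notes, no single number fully captures a system's usefulness.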
- How to measure useful, sustained performance