
Using confidence interval to summarize the evaluating results of DSM systems

Published in: Journal of Computer Science and Technology

Abstract

Distributed Shared Memory (DSM) systems have gained wide acceptance by combining the scalability and low cost of distributed systems with the ease of use of a single address space, and many new hardware and software DSM systems have been proposed in recent years. Benchmarking is widely used to demonstrate the performance advantages of such systems. However, the method commonly used to summarize measured results, the arithmetic mean of ratios, is incorrect in some cases. Furthermore, many published papers merely list large amounts of raw data without summarizing them effectively, which leaves readers confused; most users want a single number as a conclusion, which older summarizing techniques do not provide. Therefore, this paper proposes a new data-summarizing technique based on confidence intervals, comprising two methods: (1) the paired confidence interval method and (2) the unpaired confidence interval method. With this technique, one can conclude at a given confidence level that one system is better than another. Four examples are presented to demonstrate the advantages of the technique. Furthermore, with the help of confidence levels, it is proposed to standardize the benchmarks used for evaluating DSM systems so that convincing results can be obtained. The new summarizing technique is suitable not only for evaluating DSM systems, but also for evaluating other systems, such as memory systems and communication systems.
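The paper develops the two methods in detail; as a rough sketch of what they compute, the Python below implements standard paired and unpaired (Welch-style) confidence intervals for comparing two systems over a set of benchmarks. The data values, function names, and the 90% confidence level are illustrative assumptions, not results or settings taken from the paper.

```python
# Sketch of the two summarizing methods named in the abstract: a paired
# confidence interval over per-benchmark differences, and an unpaired
# confidence interval over two independent sets of measurements.
# All numbers and names below are illustrative, not data from the paper.
import math
from scipy.stats import t


def paired_ci(a, b, confidence=0.90):
    """CI for the mean per-benchmark difference a[i] - b[i]."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)      # sample variance
    half = t.ppf(1 - (1 - confidence) / 2, n - 1) * math.sqrt(var / n)
    return mean - half, mean + half


def unpaired_ci(a, b, confidence=0.90):
    """CI for the difference of means of two unpaired samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    # Welch-Satterthwaite approximation of the effective degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    half = t.ppf(1 - (1 - confidence) / 2, df) * math.sqrt(se2)
    return (ma - mb) - half, (ma - mb) + half


if __name__ == "__main__":
    # Hypothetical execution times (seconds) of the same benchmarks on two systems.
    times_a = [12.1, 30.5, 7.8, 45.2, 19.9]
    times_b = [13.4, 33.0, 8.1, 47.5, 21.2]
    lo, hi = paired_ci(times_a, times_b, confidence=0.90)
    if hi < 0:
        print(f"System A is faster at 90% confidence: CI = ({lo:.2f}, {hi:.2f})")
    elif lo > 0:
        print(f"System B is faster at 90% confidence: CI = ({lo:.2f}, {hi:.2f})")
    else:
        print(f"No conclusion at 90% confidence: CI = ({lo:.2f}, {hi:.2f})")
```

If the resulting interval excludes zero, one can claim at that confidence level that one system outperforms the other; otherwise no conclusion can be drawn. This is the kind of single, decision-oriented summary the abstract argues benchmark reports should provide.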



Author information


Corresponding author

Correspondence to Shi Weisong.

Additional information

The work described in this paper is supported by the CLIMBING Program of China and the National Natural Science Foundation of China under Grant No. 69896250.

For the biography of SHI Weisong, please see p.205 of the May 1999 issue of this Journal.

For the biography of TANG Zhimin, please see p.205 of the May 1999 issue of this Journal.

SHI Jinsong graduated from East China Normal University (ECNU) with a B.S. degree in July 1995 and received the M.S. degree from ECNU in July 1998. Currently, he is a Lecturer at the Department of Applied Mathematics, East China University of Science and Technology. His major interests include graph theory, network flow analysis, and parallel computing.


About this article

Cite this article

Shi, W., Tang, Z. & Shi, J. Using confidence interval to summarize the evaluating results of DSM systems. J. Comput. Sci. & Technol. 15, 73–83 (2000). https://doi.org/10.1007/BF02951929

