
Micro-benchmarks for Cluster OpenMP Implementations: Memory Consistency Costs

  • Conference paper
OpenMP in a New Era of Parallelism (IWOMP 2008)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5004)


Abstract

The OpenMP memory model allows a temporary view of shared memory that need only be made consistent when barrier or flush directives, including those that are implicit, are encountered. While this relaxed consistency model is key to developing cluster OpenMP implementations, it means that the memory performance of any given implementation is greatly affected by which memory is used, when it is used, and by which threads. In this work we propose a micro-benchmark that measures memory consistency costs, present results from applying it to two contrasting cluster OpenMP implementations, and compare these results with data obtained from a hardware-supported OpenMP environment.





Editor information

Rudolf Eigenmann, Bronis R. de Supinski


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wong, H.J., Cai, J., Rendell, A.P., Strazdins, P. (2008). Micro-benchmarks for Cluster OpenMP Implementations: Memory Consistency Costs. In: Eigenmann, R., de Supinski, B.R. (eds) OpenMP in a New Era of Parallelism. IWOMP 2008. Lecture Notes in Computer Science, vol 5004. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79561-2_6



  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-79560-5

  • Online ISBN: 978-3-540-79561-2
