
MPI Correctness Checking for OpenMP/MPI Applications


Abstract

The MPI interface is the de facto standard for message passing applications, but it is also complex and defines several usage patterns as erroneous. A current trend is the investigation of hybrid programming techniques that combine MPI processes with multiple threads per process. As a result, more and more MPI implementations support multi-threaded execution, which the MPI standard restricts with several additional rules. To support developers of hybrid MPI applications, we present extensions to the MPI correctness checking tool Marmot. Basic extensions make it aware of OpenMP multi-threading, while further ones add new correctness checks. As a result, Marmot can detect errors that actually occur in a given run. However, some errors only manifest for certain execution orders; thus, we present a novel approach that introduces artificial data races, which allows us to employ thread checking tools, e.g., the Intel Thread Checker, to detect MPI usage errors.
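To make the class of errors concrete, the following minimal hybrid MPI/OpenMP fragment is our own hypothetical illustration (not taken from the paper) of a pattern a threading-aware checker such as the extended Marmot targets: every OpenMP thread issues an MPI call concurrently, which is legal only if MPI_Init_thread actually granted MPI_THREAD_MULTIPLE. Because the code never checks the granted level, the error may or may not manifest in a given run.

    /* Hypothetical illustration (not from the paper): a hybrid usage
     * error of the kind a threading-aware MPI checker should report. */
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int provided, rank, i, buf;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Erroneous unless the library granted MPI_THREAD_MULTIPLE:
             * 'provided' is never checked, yet all OpenMP threads call
             * MPI_Send concurrently.  Whether this fails in practice
             * depends on the MPI implementation and on thread timing. */
            #pragma omp parallel private(buf)
            {
                buf = omp_get_thread_num();
                MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            }
        } else if (rank == 1) {
            /* Assumes both ranks run with the same OMP_NUM_THREADS. */
            for (i = 0; i < omp_get_max_threads(); i++)
                MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least two MPI processes; a conventional test run may pass silently, which is exactly the timing-dependent situation the run-time checks described above address.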
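The artificial-data-race idea can be sketched with the standard PMPI profiling interface. The sketch below is our own assumption-laden reading of the abstract, not the paper's implementation: whenever a call that must not execute concurrently (here, any MPI call below MPI_THREAD_MULTIPLE) enters the library, a wrapper performs an unsynchronized update of a shared "canary" variable. If two threads can reach such calls in parallel under any interleaving, a data-race detector such as the Intel Thread Checker reports a race on the canary, even in runs where the calls happened not to overlap. The canary variable and the choice of wrapped calls are illustrative.

    /* Hypothetical PMPI-based sketch of the artificial-data-race idea;
     * the canary and the selection of wrapped calls are assumptions. */
    #include <mpi.h>

    static int mpi_call_canary;   /* shared, deliberately unsynchronized */
    static int granted_level;

    /* Racy read-modify-write: two threads executing this concurrently
     * constitute a data race that a thread checker will report. */
    static void touch_canary(void)
    {
        if (granted_level < MPI_THREAD_MULTIPLE)
            mpi_call_canary++;
    }

    int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)
    {
        int err = PMPI_Init_thread(argc, argv, required, provided);
        granted_level = *provided;   /* written once, before threading */
        return err;
    }

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        touch_canary();   /* races iff threads may issue MPI calls in parallel */
        return PMPI_Send(buf, count, type, dest, tag, comm);
    }

Linked in front of the MPI library and run under a thread checker, the resulting race report pinpoints the pair of MPI call sites that may conflict, turning a schedule-dependent MPI usage error into a deterministic data-race diagnosis.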



Author information

Correspondence to Tobias Hilbrich.


Cite this article

Hilbrich, T., Müller, M.S. & Krammer, B. MPI Correctness Checking for OpenMP/MPI Applications. Int J Parallel Prog 37, 277–291 (2009). https://doi.org/10.1007/s10766-009-0099-4
