DOI: 10.1145/3615318.3615324

A Shim Layer for Transparently Adding Meta Data to MPI Handles

Published: 21 September 2023

ABSTRACT

MPI tool and abstraction libraries often need to bind metadata to the various kinds of MPI opaque handles. Communicator, window, and datatype handles allow key-value pairs to be associated with them as a means of storing such metadata. The shorter-lived request handles, however, provide no such facility. As a result, several tool libraries use map-like data structures, keyed by the handle value, to bind internal metadata to MPI opaque handles and to track those handles across their lifetime. This approach raises several challenges. In this paper we make the case that request handles associated with different concurrent communication operations are not guaranteed to be unique when returned from the MPI library, so a simple map may produce conflicts. Furthermore, MPI handles are not guaranteed to remain constant over their lifetime, which makes the use of a map even more questionable. In this work, we present a shim layer that wraps MPI opaque handles, is transparent to the application, and allows tool and abstraction libraries to uniquely distinguish semantically different handles. At the same time, the handle shim layer stores the metadata in the wrapped handle, avoiding the map-like data structures, and sandboxes the MPI handle so that changes over its lifetime cause no harm. We provide a thread-safe proof-of-concept implementation covering the most relevant MPI-4 functions that can be used with multi-threaded MPI applications. The implementation transparently supports the different underlying base-language types that MPI implementations choose for their handles. We evaluate the integration into several tools and abstraction libraries. In an overhead evaluation on synthetic benchmarks, our handle shim layer reduces the runtime overhead compared to a map-like data structure in most single-threaded cases.
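The core idea can be illustrated with a small, hedged sketch in C. The wrapper type, the field names, and the direct pointer cast below are illustrative assumptions, not the paper's actual API; in particular, casting a wrapper pointer to MPI_Request only works for implementations whose handles are pointer-sized (such as Open MPI), whereas the paper's implementation supports the different base-language handle types transparently. Note that for communicators, windows, and datatypes, metadata can instead be cached through the standard attribute interface (e.g., MPI_Comm_set_attr); requests offer no such hook, which is the gap the wrapper fills.

    #include <mpi.h>
    #include <stdlib.h>

    /* Illustrative wrapper: the application-visible request embeds the
     * real MPI handle next to tool-defined metadata. */
    typedef struct {
        MPI_Request real;  /* handle owned by the MPI library; it may change
                              over its lifetime, but only inside this sandbox */
        void       *meta;  /* tool metadata, no external map lookup needed */
    } shim_request_t;

    int MPI_Isend(const void *buf, int count, MPI_Datatype datatype, int dest,
                  int tag, MPI_Comm comm, MPI_Request *request)
    {
        shim_request_t *w = malloc(sizeof(*w));
        w->meta = NULL;  /* a tool would attach its per-request data here */
        int rc = PMPI_Isend(buf, count, datatype, dest, tag, comm, &w->real);
        *request = (MPI_Request)w;  /* assumes pointer-sized handles, see above */
        return rc;
    }

    int MPI_Wait(MPI_Request *request, MPI_Status *status)
    {
        shim_request_t *w = (shim_request_t *)*request;
        int rc = PMPI_Wait(&w->real, status);
        /* a tool would consume w->meta here before releasing the wrapper */
        free(w);
        *request = MPI_REQUEST_NULL;
        return rc;
    }

Because the application only ever sees the wrapper, two concurrent operations always receive distinct wrapper addresses even if the MPI library recycles the same underlying handle value, and any in-place modification of the real handle stays confined to the wrapper.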


Published in

EuroMPI '23: Proceedings of the 30th European MPI Users' Group Meeting
September 2023, 123 pages
ISBN: 9798400709135
DOI: 10.1145/3615318

        Copyright © 2023 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• research-article
• refereed limited

Acceptance Rates

Overall acceptance rate: 66 of 139 submissions (47%)
