DOI: 10.1145/1362622.1362663

PNMPI tools: a whole lot greater than the sum of their parts

Published: 10 November 2007

ABSTRACT

PNMPI extends the PMPI profiling interface to support multiple concurrent PMPI-based tools by enabling users to assemble tool stacks. We extend this basic concept to include new services for tool interoperability and the ability to switch between tool stacks dynamically. This allows PNMPI to support modules that virtualize MPI execution environments within an MPI job, or that restrict the application of existing, unmodified tools to a dynamic subset of MPI calls or even call sites.
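
The PMPI mechanism that PNMPI builds on is part of the MPI standard: every MPI routine also has a PMPI-prefixed entry point, so a tool can interpose on a call by defining its own version of the MPI symbol and forwarding to the PMPI one. The sketch below is a minimal, illustrative interposition wrapper (not code from the paper); it shows why, without an infrastructure like PNMPI, only one such tool can be linked into an executable at a time, since each MPI symbol can be defined only once.

    /* Minimal PMPI interposition sketch (illustrative, not from the paper).
     * The tool library defines MPI_Send itself; the linker resolves the
     * application's MPI_Send calls to this wrapper, which forwards to the
     * MPI library via PMPI_Send. */
    #include <mpi.h>
    #include <stdio.h>

    static long send_count = 0;  /* per-process count of intercepted sends */

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        ++send_count;                                   /* tool logic */
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        fprintf(stderr, "rank %d: %ld MPI_Send calls\n", rank, send_count);
        return PMPI_Finalize();
    }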

Further, we extend PNMPI to platforms without dynamic linking, such as BlueGene/L, and we introduce an extended performance model along with experimental data from microbenchmarks to show that the performance overhead on any platform is negligible. More importantly, we provide significant new MPI tool components that are sufficient to compose interesting MPI tools. We present three detailed PNMPI usage scenarios that demonstrate that it significantly simplifies the creation of application-specific tools.
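
Under PNMPI, unmodified PMPI tools like the counter above become stackable modules: PNMPI itself occupies the MPI_* symbols and dispatches each intercepted call down a user-configured stack, so a module's PMPI_Send call reaches the next module rather than the MPI library directly. As a hedged illustration of a second, independent tool that could share a stack with the counter (the module-loading and configuration details in the comments are hypothetical, not reproduced from the paper):

    /* A second, independent PMPI tool: accumulate wall time spent in
     * MPI_Send. Linked natively, it would clash with the counter tool
     * over the MPI_Send symbol; loaded as a PNMPI module, both can run,
     * because PNMPI forwards each module's PMPI_Send call to the next
     * module in the stack. (How modules are named and ordered is
     * PNMPI-specific configuration, sketched hypothetically here.) */
    #include <mpi.h>
    #include <stdio.h>

    static double send_seconds = 0.0;  /* accumulated time in MPI_Send */

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = PMPI_Wtime();
        int err = PMPI_Send(buf, count, datatype, dest, tag, comm);
        send_seconds += PMPI_Wtime() - t0;
        return err;
    }

    int MPI_Finalize(void)
    {
        fprintf(stderr, "time in MPI_Send: %.6f s\n", send_seconds);
        return PMPI_Finalize();
    }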


Published in

  SC '07: Proceedings of the 2007 ACM/IEEE conference on Supercomputing
  November 2007, 723 pages
  ISBN: 9781595937643
  DOI: 10.1145/1362622
  Copyright © 2007 ACM
  Publisher: Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • research-article

    Acceptance Rates

SC '07 Paper Acceptance Rate: 54 of 268 submissions, 20%. Overall Acceptance Rate: 1,516 of 6,373 submissions, 24%.
