OSTI.GOV · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Understanding the use of message passing interface in exascale proxy applications

Journal Article · Concurrency and Computation: Practice and Experience
DOI: https://doi.org/10.1002/cpe.5901 · OSTI ID: 1860774
Authors: [1]; [2]; [2]; [3]; [4]; [4]
  1. Auburn Univ., AL (United States)
  2. Univ. of Tennessee, Chattanooga, TN (United States)
  3. Univ. of Alabama, Birmingham, AL (United States)
  4. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing

Summary: The Exascale Computing Project (ECP) focuses on the development of future exascale-capable applications. Most ECP applications use the message passing interface (MPI) as their parallel programming model, with mini-apps serving as proxies. This paper explores the explicit usage of MPI in such ECP proxy applications. We empirically analyze 14 proxy applications from the ECP Proxy Apps Suite, using the MPI profiling interface (PMPI) to collect their MPI usage patterns. Our analysis shows that a small subset of MPI features is commonly used in the proxies of exascale-capable applications, even when they reference third-party libraries. This study is intended to provide a better understanding of MPI usage in current exascale applications. The findings can help focus software investments made for exascale systems on MPI middleware, including optimization, fault tolerance, tuning, and hardware offload.
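
The PMPI-based data collection mentioned in the summary relies on MPI's standard profiling interface: a measurement library supplies its own definitions of selected MPI_* functions and forwards each call to the implementation's matching PMPI_* entry point. The C sketch below is a minimal, illustrative interposer that counts MPI_Send calls per rank; it is not the instrumentation used in the paper, and the counter name and report format are placeholders.

/* Minimal PMPI interposition sketch (illustrative; not the paper's profiler). */
#include <mpi.h>
#include <stdio.h>

static long send_count = 0;   /* hypothetical per-rank tally of MPI_Send calls */

/* Intercept MPI_Send: record the call, then forward to the real
   implementation through its PMPI_ entry point. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    send_count++;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

/* Intercept MPI_Finalize to report the tally before the library shuts down. */
int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: MPI_Send called %ld times\n", rank, send_count);
    return PMPI_Finalize();
}

Such an interposer is typically linked before the MPI library (or preloaded at run time) so that its MPI_Send definition shadows the default one, letting usage data be gathered without modifying the application's source.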

Research Organization:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)
Grant/Contract Number:
AC52-07NA27344; CCF-1562659; CCF-1562306; CCF-1617690; CCF-1822191; CCF-1821431
OSTI ID:
1860774
Alternate ID(s):
OSTI ID: 1651212
Report Number(s):
LLNL-JRNL-766480; 956866
Journal Information:
Concurrency and Computation: Practice and Experience, Vol. 33, Issue 14; ISSN 1532-0626
Publisher:
Wiley
Country of Publication:
United States
Language:
English
Citation Metrics:
Cited by: 9 works (citation information provided by Web of Science)

Similar Records

Failure recovery for bulk synchronous applications with MPI stages
Journal Article · February 2019 · Parallel Computing

MPI Session: External Network Transport Implementation (V.1.0)
Technical Report · September 2020

A Survey of MPI Usage in the U.S. Exascale Computing Project
Technical Report · June 2018