DOI: 10.1145/3394277.3401855

A Smoothed Particle Hydrodynamics Mini-App for Exascale

Published: 29 June 2020

Abstract

Smoothed Particle Hydrodynamics (SPH) is a particle-based, meshfree, Lagrangian method used to simulate multidimensional fluids with arbitrary geometries, most commonly employed in astrophysics, cosmology, and computational fluid dynamics (CFD). These computationally demanding numerical simulations are expected to benefit significantly from the upcoming Exascale computing infrastructures, which will perform 10^18 FLOP/s. In this work, we review the status of a novel SPH-EXA mini-app, the result of an interdisciplinary co-design project between the fields of astrophysics, fluid dynamics, and computer science, whose goal is to enable SPH simulations to run on Exascale systems. The SPH-EXA mini-app merges the main characteristics of three state-of-the-art parent SPH codes (namely ChaNGa, SPH-flow, and SPHYNX) with state-of-the-art (parallel) programming, optimization, and parallelization methods. The proposed SPH-EXA mini-app is a lightweight, flexible, header-only C++14 code with no external software dependencies. Parallelism is expressed via multiple programming models, which can be chosen at compilation time with or without accelerator support, for a hybrid process+thread+accelerator configuration. Strong- and weak-scaling experiments on a production supercomputer show that the SPH-EXA mini-app can be efficiently executed with up to 267 million particles and up to 65 billion particles in total on 2,048 hybrid CPU-GPU nodes.
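
The hybrid process+thread+accelerator design described above can be illustrated schematically. The following is a minimal, hypothetical sketch of one common way a hybrid MPI+OpenMP SPH density loop can be organized; all names (Particle, kernelW, computeDensity) are illustrative assumptions, the brute-force neighbor loop stands in for the tree-based neighbor search a production code uses, and this is not the actual SPH-EXA implementation.

// Hypothetical sketch: MPI ranks own particle blocks (process level),
// OpenMP threads share the per-particle loop (thread level).
#include <cmath>
#include <cstddef>
#include <vector>
#include <mpi.h>

struct Particle { double x, y, z, h, rho; };

constexpr double kPi = 3.14159265358979323846;

// Illustrative cubic-spline-like smoothing kernel (placeholder only).
double kernelW(double r, double h)
{
    double q = r / h;
    return q < 2.0 ? (2.0 - q) * (2.0 - q) * (2.0 - q) / (kPi * h * h * h) : 0.0;
}

// Density summation over locally owned particles. A real SPH code uses a
// tree/neighbor search instead of this O(N^2) loop.
void computeDensity(std::vector<Particle>& local, double mass)
{
    #pragma omp parallel for schedule(dynamic)
    for (std::size_t i = 0; i < local.size(); ++i)
    {
        double rho = 0.0;
        for (const Particle& pj : local)
        {
            double dx = local[i].x - pj.x;
            double dy = local[i].y - pj.y;
            double dz = local[i].z - pj.z;
            rho += mass * kernelW(std::sqrt(dx * dx + dy * dy + dz * dz), local[i].h);
        }
        local[i].rho = rho;
    }
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::vector<Particle> local(1000, Particle{0.0, 0.0, 0.0, 0.1, 0.0});
    computeDensity(local, 1.0);
    MPI_Finalize();
    return 0;
}

In this pattern, domain decomposition across MPI ranks supplies the process-level parallelism and the OpenMP pragma supplies the thread-level parallelism; an accelerator back end could replace the inner summation, matching the compile-time selection of programming models with or without accelerator support described in the abstract.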

Published In

PASC '20: Proceedings of the Platform for Advanced Scientific Computing Conference
June 2020
169 pages
ISBN:9781450379939
DOI:10.1145/3394277
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Exascale
  2. SPH
  3. algorithms
  4. mini-app
  5. parallelization
  6. performance

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

PASC '20

Acceptance Rates

PASC '20 Paper Acceptance Rate: 16 of 36 submissions, 44%
Overall Acceptance Rate: 109 of 221 submissions, 49%

Article Metrics

  • Downloads (Last 12 months): 68
  • Downloads (Last 6 weeks): 15
Reflects downloads up to 08 Mar 2025

Cited By

  • (2024) Smoothed particle hydrodynamics implementation of the standard viscous–plastic sea-ice model and validation in simple idealized experiments. The Cryosphere, 18(3):1013-1032. DOI: 10.5194/tc-18-1013-2024. Online publication date: 4 Mar 2024.
  • (2024) Multi-level Load Balancing Strategies for Massively Parallel Smoothed Particle Hydrodynamics Simulation. Proceedings of the 53rd International Conference on Parallel Processing, 400-410. DOI: 10.1145/3673038.3673090. Online publication date: 12 Aug 2024.
  • (2024) Increasing Energy Efficiency of Astrophysics Simulations Through GPU Frequency Scaling. Proceedings of the SC '24 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 1826-1834. DOI: 10.1109/SCW63240.2024.00229. Online publication date: 17 Nov 2024.
  • (2024) Scalable In-Situ Visualization for Extreme-Scale SPH Simulations. Proceedings of the SC '24 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 853-858. DOI: 10.1109/SCW63240.2024.00121. Online publication date: 17 Nov 2024.
  • (2024) InsitUE - Enabling Hybrid In-situ Visualizations Through Unreal Engine and Catalyst. High Performance Computing. ISC High Performance 2024 International Workshops, 469-481. DOI: 10.1007/978-3-031-73716-9_33. Online publication date: 14 Dec 2024.
  • (2023) Accurate Measurement of Application-level Energy Consumption for Energy-Aware Large-Scale Simulations. Proceedings of the SC '23 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 1881-1884. DOI: 10.1145/3624062.3624272. Online publication date: 12 Nov 2023.
  • (2023) Application Experiences on a GPU-Accelerated Arm-based HPC Testbed. Proceedings of the HPC Asia 2023 Workshops, 35-49. DOI: 10.1145/3581576.3581621. Online publication date: 27 Feb 2023.
  • (2023) SWSPH: A Massively Parallel SPH Implementation for Hundred-Billion-Particle Simulation on New Sunway Supercomputer. Euro-Par 2023: Parallel Processing, 564-577. DOI: 10.1007/978-3-031-39698-4_38. Online publication date: 24 Aug 2023.
  • (2023) Performance Evaluation of a Next-Generation SX-Aurora TSUBASA Vector Supercomputer. High Performance Computing, 359-378. DOI: 10.1007/978-3-031-32041-5_19. Online publication date: 10 May 2023.
  • (2022) Online Thread Auto-Tuning for Performance Improvement and Resource Saving. IEEE Transactions on Parallel and Distributed Systems, 33(12):3746-3759. DOI: 10.1109/TPDS.2022.3169410. Online publication date: 1 Dec 2022.
