DOI: 10.1145/3437359.3465566
Research article

Adaptive Plasma Physics Simulations: Dealing with Load Imbalance using Charm++

Published: 17 July 2021

ABSTRACT

High Performance Computing (HPC) is nearing the exascale era, and several challenges must be addressed in application development. Future parallel programming models should not only help developers take full advantage of the underlying machine but also account for highly dynamic runtime conditions, including frequent hardware failures. In this paper, we analyze the process of porting a plasma confinement simulator from a traditional MPI+OpenMP approach to a parallel-objects model, Charm++. The main driver for this effort is the existence of load-imbalanced input scenarios that pure OpenMP scheduling cannot solve. By using the Charm++ adaptive runtime and its integrated load-balancing strategies, we increased total CPU usage from 45.2% to 80.2%, a 1.64× speedup over the MPI+OpenMP implementation on a specific input scenario. Checkpointing was added to the simulator through the pack-unpack (PUP) interface implemented by Charm++, providing scientists with fault tolerance and split-execution capabilities.
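The two Charm++ mechanisms named above, migration-based load balancing and the pack-unpack (PUP) interface, are driven by the same serialization routine. The following minimal sketch (illustrative only; the class, entry method, and field names are hypothetical and not taken from the simulator's source) shows a 1D chare array element that opts into AtSync load balancing and implements pup() so the runtime can both migrate it and checkpoint it:

    // particle.ci (interface definition, compiled with charmc):
    //   module particle {
    //     array [1D] Particle {
    //       entry Particle();
    //       entry void step();
    //     };
    //   };

    #include <vector>
    #include "pup_stl.h"        // PUP operators for std::vector and other STL types
    #include "particle.decl.h"  // generated by charmc from particle.ci

    class Particle : public CBase_Particle {
      std::vector<double> position_;  // per-element simulation state (hypothetical)
      int steps_ = 0;

     public:
      Particle() : position_(3, 0.0) {
        usesAtSync = true;  // opt in to the runtime's AtSync load balancers
      }

      // Migration constructor: the runtime rebuilds the element on its new
      // processor and restores its state through pup() below.
      Particle(CkMigrateMessage *msg) : CBase_Particle(msg) {}

      // One unit of work; afterwards, yield to the load balancer, which may
      // migrate this element before calling ResumeFromSync().
      void step() {
        position_[0] += 0.1;  // placeholder for the real per-step computation
        ++steps_;
        AtSync();
      }

      void ResumeFromSync() override {
        thisProxy[thisIndex].step();  // continue with the next step
      }

      // Serializes all element state; the same routine drives migration,
      // disk checkpoints, and restarts.
      void pup(PUP::er &p) override {
        CBase_Particle::pup(p);
        p | position_;
        p | steps_;
      }
    };

    #include "particle.def.h"  // generated implementation glue

With pup() in place, a driver chare can request a disk checkpoint of the whole application via CkStartCheckpoint("ckpt", callback), and a later run can resume from it with the +restart ckpt command-line flag; this pairing is the basis for the fault-tolerance and split-execution capabilities the abstract mentions.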


Published in

PEARC '21: Practice and Experience in Advanced Research Computing
July 2021
310 pages
ISBN: 9781450382922
DOI: 10.1145/3437359

          Copyright © 2021 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States




Acceptance Rates

Overall Acceptance Rate: 133 of 202 submissions, 66%

