
A thread-level parallelization of pairwise additive potential and force calculations suitable for current many-core architectures

Published in The Journal of Supercomputing

Abstract

In molecular dynamics (MD) simulations, the calculation of potentials and of their derivatives with respect to the coordinates, i.e., forces, in a pairwise additive manner, as for the Lennard–Jones interactions and the short-range part of the Coulombic interactions, accounts for the bulk of the arithmetic operations. High thread-level parallelization efficiency in these pairwise additive potential and force calculations is therefore essential to use current supercomputers with many-core architectures effectively. In this paper, we propose four new thread-level parallelization algorithms for the pairwise additive potential and force calculations. We implement the four algorithms in an MD calculation code based on the fast multipole method. Performance benchmarks were performed on the FX100 supercomputer and on the Intel Xeon Phi coprocessor. The code achieves high thread-level parallelization efficiency with 32 threads on the FX100 and with up to 60 threads on the Xeon Phi.
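As a concrete illustration of the kind of kernel the paper targets, the sketch below shows a minimal thread-parallel pairwise Lennard–Jones potential and force loop in C with OpenMP. It is a generic baseline only, not one of the four algorithms proposed in the paper; all names and parameters (N, EPS, SIG, RC2, lj_forces) are hypothetical, and neighbor lists, cell decomposition, periodic boundaries, and the FMM far field are omitted.

/* Minimal sketch of a thread-parallel pairwise Lennard-Jones
 * potential/force kernel using OpenMP. Illustrative baseline only;
 * not the paper's algorithms. All names here are hypothetical. */
#include <omp.h>

#define N    1024        /* number of particles (assumed)     */
#define EPS  1.0         /* LJ well depth, reduced units      */
#define SIG  1.0         /* LJ diameter, reduced units        */
#define RC2  (3.0 * 3.0) /* squared short-range cutoff radius */

typedef struct { double x, y, z; } Vec3;

/* Accumulate LJ forces into f[] and return the total potential
 * energy. Each thread writes only to the f[i] rows it owns, so no
 * atomics are needed; the price is running over the full i-j pair
 * list (no Newton's-third-law halving), a common trade-off in
 * threaded MD kernels. */
double lj_forces(const Vec3 *r, Vec3 *f)
{
    double epot = 0.0;

#pragma omp parallel for reduction(+ : epot) schedule(dynamic, 16)
    for (int i = 0; i < N; i++) {
        Vec3 fi = {0.0, 0.0, 0.0};
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            double dx = r[i].x - r[j].x;
            double dy = r[i].y - r[j].y;
            double dz = r[i].z - r[j].z;
            double r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > RC2) continue;        /* short-range cutoff */
            double s2 = SIG * SIG / r2;
            double s6 = s2 * s2 * s2;
            /* u(r) = 4*eps*(s^12 - s^6); each pair visited twice,
             * so halve the energy contribution                   */
            epot += 0.5 * 4.0 * EPS * (s6 * s6 - s6);
            /* |F|/r = 24*eps*(2*s^12 - s^6) / r^2                */
            double fr = 24.0 * EPS * (2.0 * s6 * s6 - s6) / r2;
            fi.x += fr * dx;
            fi.y += fr * dy;
            fi.z += fr * dz;
        }
        f[i] = fi;
    }
    return epot;
}

Per-thread accumulation into a private fi avoids write conflicts between threads at the cost of evaluating each pair twice for the forces; production MD kernels further reduce the O(N^2) cost with neighbor lists or cell-linked lists.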




Acknowledgements

We thank Dr. Y. Komura for valuable suggestions on code 1. This work was supported by the “Joint Usage/Research Center for Interdisciplinary Large-Scale Information Infrastructures” and “High Performance Computing Infrastructure” in Japan (Project IDs jh150015-NA11, jh160040-NAJ, and jh170024-NAH). This work was also supported by FLAGSHIP2020, MEXT, within priority study 5: Development of new fundamental technologies for high-efficiency energy creation, conversion/storage and use (Proposal No. hp170241). This work was partially funded by MEXT’s program for the Development and Improvement of the Next Generation Ultra High-Speed Computer System, under its Subsidies for Operating the Specific Advanced Large Research Facilities (S. S.). Benchmark calculations were performed at the Information Technology Center (ITC) of Nagoya University and at the ITC of The University of Tokyo. This work was also supported by JSPS KAKENHI Grant Number 16K21094 (Y. A.) and by MEXT KAKENHI Grant Number 26410012 (N. Y.).

Author information

Corresponding author

Correspondence to Yoshimichi Andoh.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 78 KB)


About this article


Cite this article

Andoh, Y., Suzuki, S., Ohshima, S. et al. A thread-level parallelization of pairwise additive potential and force calculations suitable for current many-core architectures. J Supercomput 74, 2449–2469 (2018). https://doi.org/10.1007/s11227-018-2272-2


