Performance evaluation of explicit finite difference algorithms with varying amounts of computational and memory intensity

https://doi.org/10.1016/j.jocs.2016.10.015
Open access under a Creative Commons license.

Highlights

  • Architectures designed for exascale performance motivate novel algorithmic changes.

  • Algorithms of varying degrees of memory and computational intensity are evaluated.

  • Automated code generation facilitates such algorithmic changes.

  • Storing some of the evaluated derivatives as local variables is shown to be optimal.

  • The optimal algorithm is about two times faster than the baseline algorithm.

Abstract

Future architectures designed to deliver exascale performance motivate the need for novel algorithmic changes in order to fully exploit their capabilities. In this paper, the performance of several numerical algorithms, characterised by varying degrees of memory and computational intensity, is evaluated in the context of finite difference methods for fluid dynamics problems. It is shown that, by storing some of the evaluated derivatives as thread- or process-local variables in memory, or by recomputing the derivatives on-the-fly, a speed-up of ∼2 can be obtained compared with traditional algorithms that store all derivatives in global arrays.
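To make the distinction concrete, the following is a minimal 1D sketch in C, not the authors' actual implementation (which is generated automatically for finite difference solvers), contrasting a baseline variant that stores a derivative in a global work array with a variant that keeps it in a loop-local variable and recomputes it on the fly. The function and variable names (baseline_rhs, fused_rhs) are illustrative assumptions only.

/*
 * Minimal sketch of the two algorithm classes described in the abstract,
 * applied to the linear advection right-hand side du/dt = -c * du/dx:
 *   - baseline_rhs: the derivative is evaluated once into a global work
 *     array and then read back in a second sweep (memory-intensive);
 *   - fused_rhs: the derivative is recomputed inside the loop and held in
 *     a thread-/process-local scalar, trading arithmetic for memory traffic.
 */
#include <stdio.h>
#include <stdlib.h>

#define N  1024
#define DX 0.01

/* Baseline: store the central-difference derivative of u in a global array,
 * then use it in a second sweep to assemble the right-hand side. */
static void baseline_rhs(const double *u, double *dudx, double *rhs, double c) {
    for (int i = 1; i < N - 1; ++i)
        dudx[i] = (u[i + 1] - u[i - 1]) / (2.0 * DX);
    for (int i = 1; i < N - 1; ++i)
        rhs[i] = -c * dudx[i];
}

/* Fused: recompute the derivative on the fly and keep it in a local
 * variable, so no global derivative array is needed. */
static void fused_rhs(const double *u, double *rhs, double c) {
    for (int i = 1; i < N - 1; ++i) {
        const double dudx = (u[i + 1] - u[i - 1]) / (2.0 * DX);  /* local */
        rhs[i] = -c * dudx;
    }
}

int main(void) {
    double *u    = malloc(N * sizeof(double));
    double *dudx = malloc(N * sizeof(double));
    double *rhs1 = calloc(N, sizeof(double));
    double *rhs2 = calloc(N, sizeof(double));
    for (int i = 0; i < N; ++i) u[i] = (double)i * DX;

    baseline_rhs(u, dudx, rhs1, 1.0);
    fused_rhs(u, rhs2, 1.0);

    /* Both variants produce identical results; they differ only in how much
     * intermediate data is written to and read from main memory. */
    printf("rhs1[10] = %f, rhs2[10] = %f\n", rhs1[10], rhs2[10]);

    free(u); free(dudx); free(rhs1); free(rhs2);
    return 0;
}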

Keywords

Computational fluid dynamics
Finite difference methods
Algorithms
Exascale
Parallel computing
Performance

Christian T. Jacobs is a Research Fellow in the Aerodynamics and Flight Mechanics Group at the University of Southampton.

Satya P. Jammy is a Research Fellow in the Aerodynamics and Flight Mechanics Group at the University of Southampton.

Neil D. Sandham is Professor of Aerospace Engineering in the Aerodynamics and Flight Mechanics Group at the University of Southampton.