
Computer Physics Communications

Volume 207, October 2016, Pages 186-192

Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

https://doi.org/10.1016/j.cpc.2016.06.003

Abstract

Geometrical shock dynamics (GSD) theory is an appealing method to predict shock motion because it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, for solving and optimizing large-scale configurations, the main bottleneck remains the computational cost. Among the existing numerical GSD schemes, only one has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to the most expensive function in this implementation, resulting in an efficiency of 0.93 on 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.

Introduction

Focusing of shock waves can generate extreme conditions, such as high pressure and temperature, at the focal region. Shock focusing occurs in a variety of man-made and naturally occurring events, for example in extracorporeal shock wave lithotripsy [1], inertial confinement fusion [2] and the collapse of cavitation bubbles [3]. The first researcher to study shock focusing was Guderley [4], who developed analytical solutions for converging cylindrical and spherical shock waves. Following his work, numerous authors have investigated this area. There are two major numerical approaches to studying shock wave propagation: solving the Navier–Stokes equations, or solving the inviscid Euler equations when viscosity in the shocked medium can be neglected. The advantage of these two approaches is that the full flow field is obtained accurately. However, the strength of the shock front in the focusing region can be much higher than that of the initial shock front, resulting in smaller time scales to maintain the Courant–Friedrichs–Lewy (CFL) condition. In addition, the length of the shock front in the focusing region can be much smaller than that of the initial shock front. To resolve all small scales close to, and in, the focal region, high resolution in both time and space is required, which can make the computational task very expensive. Whitham proposed an alternative method [5], named Geometrical Shock Dynamics (GSD), that describes the motion of shock waves in a different way. Unlike the Navier–Stokes and Euler equations, this theory avoids dealing with the flow field around the shock and focuses only on the shock front itself. As a result, solving a shock focusing event with GSD instead of the Navier–Stokes or Euler equations reduces the computational complexity by lowering the dimensionality of the problem.
In addition, the actual computational cost may be reduced by more than an order of magnitude, depending on the required grid resolution, when dealing with higher dimensional problems. In GSD, the shock front is discretized into small elements. Between each element, orthogonal trajectories are introduced as rays, so that each shock front element can be approximated as propagating down a tube whose boundaries are defined by the rays, a so-called ray tube. The main assumption in GSD is that the motion of the shock changes only with the variation of the ray tube area. Then, instead of solving the full Euler equations, the motion of the shock can be predicted by deriving the relation between the shock strength, represented by the Mach number, M, and the area of the ray tube, A. This is the so-called Area–Mach number (A–M) relation,

$$\frac{1}{A(x)}\frac{dA}{dM} = -g(M),$$

where

$$g(M) = \frac{M}{M^2 - 1}\left(1 + \frac{2}{\gamma + 1}\,\frac{1 - \mu^2}{\mu}\right)\left(1 + 2\mu + \frac{1}{M^2}\right), \qquad \mu^2 = \frac{(\gamma - 1)M^2 + 2}{2\gamma M^2 - (\gamma - 1)}.$$

Here, γ represents the adiabatic index, x denotes the distance along the ray tube and the cross-sectional area A(x) is a function of x; see Ref. [6] for additional details.
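As an illustration of the A–M relation, the sketch below (Python; the value γ = 1.4 and all function names are assumptions for illustration, not the authors' code) evaluates g(M) and integrates dA/A = −g(M) dM with the trapezoidal rule to obtain the ray-tube area ratio between two Mach numbers:

```python
import numpy as np

GAMMA = 1.4  # adiabatic index for air (assumed value)

def mu_squared(M, gamma=GAMMA):
    # mu^2 = ((gamma - 1) M^2 + 2) / (2 gamma M^2 - (gamma - 1))
    return ((gamma - 1.0) * M**2 + 2.0) / (2.0 * gamma * M**2 - (gamma - 1.0))

def g(M, gamma=GAMMA):
    # Whitham's g(M) from the A-M relation: -(1/A) dA/dM = g(M)
    m2 = mu_squared(M, gamma)
    mu = np.sqrt(m2)
    return (M / (M**2 - 1.0)) \
        * (1.0 + 2.0 / (gamma + 1.0) * (1.0 - m2) / mu) \
        * (1.0 + 2.0 * mu + 1.0 / M**2)

def area_ratio(M0, M1, n=4000):
    # A(M1)/A(M0) = exp(-integral of g(M) dM from M0 to M1),
    # evaluated by trapezoidal quadrature on n sample points
    M = np.linspace(M0, M1, n)
    gv = g(M)
    integral = np.sum((gv[:-1] + gv[1:]) * np.diff(M)) / 2.0
    return np.exp(-integral)
```

Since g(M) > 0 for M > 1, the area ratio is below one whenever M1 > M0: the ray tube must contract for the shock to strengthen, which is the mechanism behind converging shock focusing.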

There exists a variety of algorithms to implement GSD numerically; only a subset is listed here. The method of characteristics was used by Bryson and Gross [7] to analyze shock diffraction. Decades later, a front tracking approach was developed by Henshaw [8] in two dimensions and by Schwendeman [9] in three dimensions. A conservative finite difference method was formulated by Schwendeman [10]. Most recently, a fast-marching-like algorithm was developed by Noumir et al. [11]. Among these algorithms, the front tracking method is the most widely used due to its accuracy and simplicity of implementation. For example, Schwendeman applied it to study shock motion in non-uniform media [12], Best modified it to investigate underwater explosions [13], [14], and Apazidis and Lesser utilized it for comparison with converging shock experiments [15]. However, for large-scale applications the computational cost of the front tracking method remains a major bottleneck, one that can be addressed through parallel computing techniques.

In our work, the front tracking method has been implemented on parallel computers, and symmetric boundary conditions have been developed to reduce the computational expense dramatically.

Section snippets

Serial scheme

In this study, the numerical implementation of GSD is based on previous front tracking methods [8], [13], [14]. A short review of the numerical scheme follows. In two dimensions, the shock front is discretized into N points denoted by $\mathbf{x}_i(t)$, where $i = 1, 2, \dots, N$. The shock front propagates along the direction of its normal vector, with a speed determined by the A–M relation. The motion of the shock is described by

$$\frac{d\mathbf{x}_i(t)}{dt} = a_0 M_i(t)\,\mathbf{n}_i(t), \qquad i = 1, \dots, N,$$

where $a_0$ denotes the ambient speed of sound and $\mathbf{n}_i(t)$ the unit normal at point $i$.
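A minimal sketch of this propagation step is given below (Python; the array layout, the sound speed value, and the helper names are assumptions for illustration, not the authors' implementation). It advances a closed two-dimensional front by one explicit Euler step of dx_i/dt = a0 M_i n_i, with normals estimated by centered differences along the front:

```python
import numpy as np

A0 = 340.0  # ambient sound speed [m/s]; an assumed value, not from the paper

def normals(x):
    """Unit normals to a closed 2-D front from centered differences.

    x is an (N, 2) array of front points in periodic (closed-curve) order.
    For counter-clockwise ordering this normal points outward; flip the
    sign (or reverse the ordering) to march a converging shock inward.
    """
    t = np.roll(x, -1, axis=0) - np.roll(x, 1, axis=0)  # centered tangents
    n = np.stack([t[:, 1], -t[:, 0]], axis=1)           # rotate tangent by -90 deg
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def advance(x, M, dt, a0=A0):
    """One explicit Euler step of dx_i/dt = a0 * M_i * n_i."""
    return x + dt * a0 * M[:, None] * normals(x)
```

In the full scheme each point also carries its own Mach number M_i, updated from the A–M relation as the local ray-tube area changes; the sketch above only shows the geometric transport of the front.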

Benchmark

Performance analysis has been conducted for the two parallel schemes on the high performance computing facility (HPCC) at the University of Southern California. The nodeset, HPCC sl250s, consists of 256 dual octa-core Intel Xeon CPUs operating at 2.4 GHz with 64 GB memory. A strong-scaling test has been performed for both schemes by simulating a decagonal converging shock with a total of 3,242,911 discrete points. The initial conditions are the same as those described in Section 2.2.
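The strong-scaling metrics used in such a test follow the standard definitions; a small sketch (Python, with purely illustrative timings, not the paper's measurements):

```python
def speedup(t_serial, t_parallel):
    # S(p) = T_1 / T_p
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # E(p) = T_1 / (p * T_p) = S(p) / p
    return t_serial / (p * t_parallel)

# Illustrative numbers only: a serial run of 100 s that takes 3.36 s
# on 32 cores corresponds to an efficiency of about 0.93, the same
# order as the figure reported for this implementation.
```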

Conclusions

In this study, a spatial decomposition method was adopted to implement the GSD simulation on parallel computers. First, performance profiling was conducted to identify the computational hot-spots in the serial scheme. Two schemes were then designed to parallelize the simulation, employing MPI and OpenMP. An efficient tridiagonal solver [17] based on the SPIKE algorithm [20], [21] was also incorporated into the parallel implementation to handle the most computationally expensive function.
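For reference, the serial building block that such parallel tridiagonal solvers accelerate is the O(N) Thomas algorithm; a minimal sketch follows (Python, illustrative only — the paper's solver applies SPIKE-style partitioning across cores rather than this sequential sweep):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Assumes the system is diagonally dominant (no pivoting).
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The sequential data dependence in the forward sweep is exactly what makes this kernel hard to parallelize naively, and why a partitioned scheme such as SPIKE is needed on many cores.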

References (21)

  • Y. Noumir et al., J. Comput. Phys. (2015)
  • E. Polizzi et al., Parallel Comput. (2006)
  • E. Polizzi et al., Comput. Fluids (2007)
  • A.G. Mulley, N. Engl. J. Med. (1986)
  • C.K. Li et al., New J. Phys. (2013)
  • G. Iosilevskii et al., J. R. Soc. Interface (2008)
  • G. Guderley, Luftfahrtforschung (1942)
  • G.B. Whitham, J. Fluid Mech. (1957)
  • G.B. Whitham (2011)
  • A.E. Bryson et al., J. Fluid Mech. (1961)
