Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves
Introduction
Focusing of shock waves can generate extreme conditions, such as high pressure and temperature, at the focal region. Shock focusing occurs in a variety of man-made and naturally occurring events, for example in extracorporeal shock wave lithotripsy [1], inertial confinement fusion [2] and the collapse of cavitation bubbles [3]. The first researcher to study shock focusing was Guderley [4], who developed analytical solutions for converging cylindrical and spherical shock waves. Following his work, numerous authors have investigated this area. Two major numerical approaches are used to study shock wave propagation: solving the Navier–Stokes equations, or solving the inviscid Euler equations when viscosity in the shocked medium can be neglected. The advantage of these approaches is that the full flow field can be accurately obtained. However, the strength of the shock front in the focusing region can be much higher than that of the initial shock front, requiring ever smaller time steps to satisfy the Courant–Friedrichs–Lewy (CFL) condition. In addition, the length of the shock front in the focusing region can be much smaller than that of the initial shock front. Resolving all small scales close to, and in, the focal region therefore requires high resolution in both time and space, which can make the computation very expensive. Whitham proposed an alternative method [5], named Geometrical Shock Dynamics (GSD), that describes the motion of shock waves in a different way. Unlike the Navier–Stokes and Euler equations, this theory avoids dealing with the flow field around the shock and focuses only on the curvature of the shock wave itself. As a result, solving a shock focusing event with GSD instead of the Navier–Stokes or Euler equations reduces the computational complexity, since a lower dimensional problem is solved.
In addition, the actual computational cost may be reduced by more than an order of magnitude, depending on the required grid resolution, when dealing with higher dimensional problems. In GSD, the shock front is discretized into small elements. Between adjacent elements, orthogonal trajectories are introduced as rays, so that each shock front element can be approximated as propagating down a tube whose boundaries are defined by the rays, a so-called ray tube. The main assumption in GSD is that the motion of the shock changes only with the variation of the ray tube area. Then, instead of solving the full Euler equations, the motion of the shock can be predicted by deriving the relation between the shock strength, represented by the Mach number $M$, and the area of the ray tube, $A$. This is the so-called Area–Mach number ($A$–$M$) relation,

$$\frac{1}{A}\frac{dA}{dx} = -\frac{M\,\lambda(M)}{M^{2}-1}\frac{dM}{dx}, \qquad \lambda(M) = \left(1 + \frac{2}{\gamma+1}\,\frac{1-\mu^{2}}{\mu}\right)\left(1 + 2\mu + \frac{1}{M^{2}}\right), \qquad \mu^{2} = \frac{(\gamma-1)M^{2}+2}{2\gamma M^{2}-(\gamma-1)}.$$

Here, $\gamma$ represents the adiabatic index, $x$ denotes the distance along the ray tube and the cross sectional area $A$ is a function of $x$; see Ref. [6] for additional details.
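To illustrate the $A$–$M$ relation in practice, the sketch below integrates its differential form numerically to obtain the ray tube area ratio between two Mach numbers. This is our own illustrative Python code, not the paper's implementation; it assumes air ($\gamma = 1.4$) and the standard Whitham form of $\lambda(M)$, and all function names are ours.

```python
import math

GAMMA = 1.4  # adiabatic index for air (assumption)

def mu_sq(M, gamma=GAMMA):
    """The mu^2 term in Whitham's lambda function."""
    return ((gamma - 1.0) * M**2 + 2.0) / (2.0 * gamma * M**2 - (gamma - 1.0))

def lam(M, gamma=GAMMA):
    """Whitham's lambda(M); tends to about 5.074 as M -> infinity for gamma = 1.4."""
    mu = math.sqrt(mu_sq(M, gamma))
    return (1.0 + 2.0 / (gamma + 1.0) * (1.0 - mu**2) / mu) * (1.0 + 2.0 * mu + 1.0 / M**2)

def area_ratio(M0, M1, steps=10000):
    """Integrate dA/A = -M lam(M)/(M^2 - 1) dM from M0 to M1 (trapezoidal rule),
    returning A(M1)/A(M0). Requires M0, M1 > 1."""
    f = lambda M: -M * lam(M) / (M**2 - 1.0)
    dM = (M1 - M0) / steps
    lnA = 0.0
    for i in range(steps):
        Ma, Mb = M0 + i * dM, M0 + (i + 1) * dM
        lnA += 0.5 * (f(Ma) + f(Mb)) * dM
    return math.exp(lnA)

# A shrinking ray tube strengthens the shock: A(M=4)/A(M=2) < 1
print(area_ratio(2.0, 4.0))
```

The monotone decrease of $A$ with $M$ is what drives shock amplification in a converging geometry.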
There exists a variety of algorithms to implement GSD numerically; only a selection is listed here. The method of characteristics was used by Bryson and Gross [7] to analyze shock diffraction. Decades later, a front tracking approach was developed by Henshaw [8] in two dimensions and by Schwendeman [9] in three dimensions. A conservative finite difference method was formulated by Schwendeman [10]. Most recently, a fast-marching-like algorithm was developed by Noumir et al. [11]. Among these algorithms, the front tracking method is the most widely used due to its accuracy and simplicity of implementation. For example, Schwendeman applied this method to study shock motion in non-uniform media [12], Best modified it to investigate underwater explosions [13], [14], and Apazidis and Lesser utilized it for comparison with converging shock experiments [15]. However, computational cost remains a major bottleneck when applying the front tracking method to large scale problems, which can be addressed through parallel computing techniques.
In our work, the front tracking method has been implemented on parallel computers, and symmetric boundary conditions have been developed in order to dramatically reduce the computational expense.
Serial scheme
In this study, the numerical implementation of GSD is based on previous front tracking methods [8], [13], [14]. A short review of the numerical scheme follows. In two dimensions, the shock front is discretized into $N$ points denoted by $\mathbf{x}_i$, where $i = 1, \ldots, N$. The shock front propagates along the direction of its local normal vector, with a speed determined by the $A$–$M$ relation. The motion of the shock is described by

$$\frac{d\mathbf{x}_i}{dt} = a_0 M_i \mathbf{n}_i,$$

where $a_0$ is the ambient sound speed, $M_i$ the local Mach number and $\mathbf{n}_i$ the local unit normal of the front.
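The front-tracking update can be sketched as follows. This is our own minimal illustration, not the paper's code: each point advances along its local unit normal, with the normal estimated by centered differences of neighboring points; for simplicity a single Mach number is applied to the whole front, whereas the full scheme couples each $M_i$ to the local ray tube area through the $A$–$M$ relation.

```python
import math

def front_step(pts, M, a0=340.0, dt=1e-6):
    """Advance each point of a closed 2D front along its local unit normal
    by a0*M*dt. `pts` is a list of (x, y) tuples; a clockwise ordering
    makes the computed normal point inward (converging shock)."""
    n = len(pts)
    new = []
    for i in range(n):
        xm, ym = pts[(i - 1) % n]
        xp, yp = pts[(i + 1) % n]
        tx, ty = xp - xm, yp - ym          # tangent via centered difference
        norm = math.hypot(tx, ty)
        nx, ny = ty / norm, -tx / norm     # rotate tangent by -90 degrees
        x, y = pts[i]
        new.append((x + a0 * M * dt * nx, y + a0 * M * dt * ny))
    return new

# Converging circular shock of unit radius, points listed clockwise
N = 360
pts = [(math.cos(-2 * math.pi * k / N), math.sin(-2 * math.pi * k / N))
       for k in range(N)]
pts = front_step(pts, M=2.0)
print(math.hypot(*pts[0]))  # radius has shrunk below 1
```

In the full scheme each step would recompute the local ray tube area from neighboring point spacings and update $M_i$ before the next advance.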
Benchmark
Performance analysis has been conducted for the two parallel schemes on the high performance computing facility (HPCC) at the University of Southern California. The node set, HPCC sl250s, consists of 256 dual octa-core Intel Xeon CPUs operating at 2.4 GHz with 64 GB of memory. A strong-scaling test has been performed for both schemes by simulating a decagon-shaped converging shock with a total of 3,242,911 discrete points. The initial conditions are the same as those described in Section 2.2.
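In a strong-scaling test the problem size (here the fixed set of roughly 3.2 million front points) stays constant while the core count grows, and the quantities of interest are speedup and parallel efficiency. The sketch below shows only this bookkeeping; the timing numbers are hypothetical, not the measurements reported in the paper.

```python
def strong_scaling(t_serial, timings):
    """Compute (speedup, efficiency) per core count for a strong-scaling
    test. `timings` maps core count -> wall-clock seconds for the same
    fixed-size problem; `t_serial` is the single-core reference time."""
    out = {}
    for p, t in sorted(timings.items()):
        speedup = t_serial / t
        out[p] = (speedup, speedup / p)
    return out

# Hypothetical wall-clock times (seconds) for a fixed-size run
res = strong_scaling(1000.0, {1: 1000.0, 2: 520.0, 4: 275.0, 8: 150.0})
for p, (s, e) in res.items():
    print(f"{p:2d} cores: speedup {s:.2f}, efficiency {e:.2f}")
```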
Conclusions
In this study, a spatial decomposition method was adopted to implement the GSD simulation on parallel computers. First, performance profiling was conducted to identify computational hot-spots in the serial scheme. Two schemes were then designed to parallelize the simulation, employing MPI and OpenMP. An efficient tridiagonal solver [17] based on the SPIKE algorithm [20], [21] was also incorporated into the parallel implementation to handle the most computationally expensive function.
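For context on the tridiagonal solve mentioned above: a SPIKE-type solver partitions the matrix into blocks, solves each block independently in parallel with a serial kernel, and couples the blocks through a small reduced system. The classic serial kernel is the Thomas algorithm, sketched below; this is our own illustrative code, not the solver of Refs. [17], [20], [21].

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal,
    b the main diagonal, and c the super-diagonal (a[0] and c[-1] unused).
    Forward elimination followed by back substitution; O(n) work."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant 4x4 example whose exact solution is x = [1, 1, 1, 1]
x = thomas([0.0, 1.0, 1.0, 1.0], [4.0, 4.0, 4.0, 4.0],
           [1.0, 1.0, 1.0, 0.0], [5.0, 6.0, 6.0, 5.0])
print(x)
```

The inherently sequential data dependence of this kernel is exactly what SPIKE-style partitioning works around to expose parallelism.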
References (21)
- et al., J. Comput. Phys. (2015)
- et al., Parallel Comput. (2006)
- et al., Comput. Fluids (2007)
- N. Engl. J. Med. (1986)
- et al., New J. Phys. (2013)
- et al., J. R. Soc. Interface (2008)
- G. Guderley, Luftfahrtforschung (1942)
- G.B. Whitham, J. Fluid Mech. (1957)
- (2011)
- A.E. Bryson et al., J. Fluid Mech. (1961)