Abstract
Previously, large-scale fluid dynamics problems required supercomputers, such as the Cray, and solutions took a long time to obtain. Clustering technology has changed the world of supercomputing and fluid dynamics. Affordable cluster computers have replaced the huge and expensive supercomputers in the computational fluid dynamics (CFD) field in recent years; even supercomputers themselves are now designed as clusters of high-performance servers. This paper describes the configuration of an affordable PC-hardware cluster as well as its parallel computing performance running a commercial CFD code. A multi-core cluster using the Linux operating system was developed with affordable PC hardware and low-cost, high-speed gigabit network switches instead of Myrinet or InfiniBand. The PC cluster consisted of 52 cores and is easily expandable up to 96 cores in the current configuration. For operating software, the Rocks cluster package was installed on the master node to minimize the need for maintenance. The cluster was designed to solve large fluid dynamics and heat transfer problems in parallel. Using a commercial CFD package, the performance of the cluster was evaluated by changing the number of CPU cores involved in the computation. A forced convection problem around a linear cascade was solved using the CFX program, and the heat transfer coefficient along the surface of the turbine cascade was simulated. The mesh of the model CFD problem had 1.5 million nodes, and the steady computation was performed for 2,000 time-integrations. The computational results were compared with previously published heat transfer experimental results to check the reliability of the computation, and the comparison showed good agreement. The performance of the designed PC cluster increased with increasing number of cores up to 16 cores. The computation (elapsed) time with 16 cores was approximately three times shorter than that with 4 cores.
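As a rough illustration of how elapsed wall-clock times translate into the speedup and parallel-efficiency figures quoted above (a ~3x speedup going from 4 to 16 cores corresponds to about 75% efficiency relative to the 4-core run), here is a minimal sketch. The elapsed times in the example are hypothetical placeholders, not the measured values from the paper.

```python
# Parallel speedup and efficiency from elapsed wall-clock times.
# The elapsed times below are hypothetical, for illustration only.

def speedup(t_ref, t_n):
    """Speedup of a run with elapsed time t_n relative to a reference run t_ref."""
    return t_ref / t_n

def efficiency(t_ref, n_ref, t_n, n):
    """Parallel efficiency relative to the reference core count n_ref."""
    return speedup(t_ref, t_n) * n_ref / n

if __name__ == "__main__":
    elapsed = {4: 1200.0, 8: 660.0, 16: 400.0}  # cores -> seconds (hypothetical)
    t4 = elapsed[4]
    for cores in sorted(elapsed):
        s = speedup(t4, elapsed[cores])
        e = efficiency(t4, 4, elapsed[cores], cores)
        print(f"{cores:2d} cores: speedup {s:.2f}x, efficiency {e:.0%}")
```

With these placeholder timings, 16 cores gives a 3.00x speedup over 4 cores, i.e. 75% efficiency, matching the shape of the trend reported in the abstract.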
Acknowledgments
This work was supported by the IT R&D program of MSIP/KEIT [10044910, Development of Multi-modality Imaging and 3D Simulation-Based Integrative Diagnosis-Treatment Support Software System for Cardiovascular Diseases].
Appendix
1.1 The coefficients of the shear stress transport model
The set of empirical constants $\Phi = (\sigma_k, \sigma_\omega, \beta, \gamma)$ used in the baseline model was calculated from two sets of constants, $\Phi_1$ and $\Phi_2$, as follows:

$$\Phi = F_1 \Phi_1 + (1 - F_1)\,\Phi_2$$

where the set of constants $\Phi_1$ was derived from the original $k$–$\omega$ model such that

$$\sigma_{k1} = 0.85,\quad \sigma_{\omega 1} = 0.5,\quad \beta_1 = 0.0750,\quad a_1 = 0.31,\quad \beta^* = 0.09,\quad \kappa = 0.41,\quad \gamma_1 = \beta_1/\beta^* - \sigma_{\omega 1}\kappa^2/\sqrt{\beta^*}$$

and the set of constants $\Phi_2$ was derived from the standard $k$–$\epsilon$ model such that

$$\sigma_{k2} = 1.0,\quad \sigma_{\omega 2} = 0.856,\quad \beta_2 = 0.0828,\quad \gamma_2 = \beta_2/\beta^* - \sigma_{\omega 2}\kappa^2/\sqrt{\beta^*}$$

$F_1$ can be expressed as

$$F_1 = \tanh(\mathrm{arg}_1^4),\qquad \mathrm{arg}_1 = \min\!\left[\max\!\left(\frac{\sqrt{k}}{0.09\,\omega y},\ \frac{500\nu}{y^2\omega}\right),\ \frac{4\rho\,\sigma_{\omega 2}\,k}{CD_{k\omega}\, y^2}\right]$$

where $y$ is the distance to the nearest surface and $CD_{k\omega}$ is the positive portion of the cross-diffusion term:

$$CD_{k\omega} = \max\!\left(2\rho\,\sigma_{\omega 2}\,\frac{1}{\omega}\,\frac{\partial k}{\partial x_j}\frac{\partial \omega}{\partial x_j},\ 10^{-20}\right)$$

The eddy viscosity is defined as

$$\nu_t = \frac{a_1 k}{\max(a_1\omega,\ \Omega F_2)}$$

where $\Omega$ is the absolute value of the vorticity. $F_2$ can be expressed as

$$F_2 = \tanh(\mathrm{arg}_2^2),\qquad \mathrm{arg}_2 = \max\!\left(\frac{2\sqrt{k}}{0.09\,\omega y},\ \frac{500\nu}{y^2\omega}\right)$$
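The constant blending and the eddy-viscosity limiter described above can be sketched in a few lines. This is an illustrative implementation of the standard Menter-form expressions, not code from any particular CFD package; all variable names are the author's own.

```python
import math

# Illustrative sketch of the SST blending and eddy-viscosity limiter.
# Constants follow the standard Menter-form values quoted in the appendix.
A1 = 0.31          # a1 in the eddy-viscosity limiter
BETA_STAR = 0.09   # beta*

def blend(phi1, phi2, f1):
    """Phi = F1*Phi1 + (1 - F1)*Phi2, blending the k-omega and k-epsilon constants."""
    return f1 * phi1 + (1.0 - f1) * phi2

def f2(k, omega, nu, y):
    """Second blending function F2 = tanh(arg2^2).

    k: turbulent kinetic energy, omega: specific dissipation rate,
    nu: kinematic viscosity, y: distance to the nearest wall.
    """
    arg2 = max(2.0 * math.sqrt(k) / (BETA_STAR * omega * y),
               500.0 * nu / (y * y * omega))
    return math.tanh(arg2 * arg2)

def eddy_viscosity(k, omega, vorticity, nu, y):
    """nu_t = a1*k / max(a1*omega, Omega*F2), with Omega the vorticity magnitude."""
    return A1 * k / max(A1 * omega, vorticity * f2(k, omega, nu, y))
```

Far from a wall, F2 tends to zero, the limiter reduces to `a1*omega`, and the expression recovers the usual `nu_t = k/omega`; near a wall with strong vorticity, the `Omega*F2` branch limits the eddy viscosity, which is the shear-stress-transport modification.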
Cite this article
Han, S., Choi, H.G. Investigation of the parallel efficiency of a PC cluster for the simulation of a CFD problem. Pers Ubiquit Comput 18, 1303–1314 (2014). https://doi.org/10.1007/s00779-013-0733-4