Parallelization and optimization of electrostatic Particle-in-Cell/Monte-Carlo Coupled codes as applied to RF discharges

https://doi.org/10.1016/j.cpc.2009.02.009

Abstract

We have developed a fully parallelized 2D electrostatic Particle-in-Cell/Monte-Carlo Coupled (PIC-MCC) code for capacitively coupled plasma (CCP) simulations. In this code the grid is distributed among processors along the radial direction, and the Poisson equation is solved in a correspondingly parallel fashion. We applied several numerical acceleration techniques: a parallel fast Poisson solver, an assembly-coded particle pusher, particle sorting, and so on. Theoretical analysis and numerical benchmarks show that this parallel framework has good efficiency and scalability. The framework of the code, the optimization techniques, and the algorithms are discussed; benchmarks and simulation results are also presented.

Introduction

Capacitively coupled plasmas (CCPs) have been widely used in plasma processing and have received intense investigation in recent years [1]. In general, there are three fundamental approaches to the theoretical study of CCPs: analytical models, fluid simulations, and Particle-in-Cell/Monte-Carlo Coupled (PIC/MCC) simulations, which can investigate plasma properties partially or fully under practical physics parameters. Analytical models [2], [3], [4] adopt many assumptions and can give only qualitative results. Fluid simulations [5], [6], [7] assume the velocity distributions of the electrons and ions to be Maxwellian; therefore only the plasma density and fields can be obtained, and ion energy and angular distributions can be calculated only by non-self-consistent off-line Monte-Carlo simulations. Kinetic effects such as stochastic electron heating cannot be investigated with fluid codes. PIC/MCC [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20] is a fully self-consistent method built from first principles: plasma properties follow directly from the practical parameters alone. The PIC-MCC method is therefore very useful to experimentalists designing new plasma processing equipment.

However, because PIC/MCC simulations are computationally expensive, most of them are one-dimensional [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20]. Recent research, for example on multi-frequency-driven CCP [10], [11], [12], [13], [21], standing-wave and skin effects in high-frequency CCP [22], and combined RF- and dc-driven CCP [23], [24], shows that dimensional effects may drastically change the properties of a CCP. Although 2D fluid simulations can give some insight into these problems, fast 2D PIC-MCC simulations [24], [25], [26], [27], [28], [29], [30] are still desired for studying kinetic and dimensional effects, especially by reactor designers. A PIC-MCC simulation of a CCP needs to run for about 800 RF cycles to reach equilibrium, which may take 10^6 or more time steps. For 1D simulations, the cell number Nz along the Z direction is 10^2 to 10^4, and 10 to 10^3 particles per cell, or 10^4 to 10^6 particles in total, are traced; such a run takes several to hundreds of hours on an ordinary PC. The 2D simulations must therefore be executed in parallel. Even on parallel computers the simulation takes a long time, so the algorithms must be carefully optimized.
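The cost figures quoted above can be made concrete with a back-of-envelope estimate. The function below is purely illustrative (the assumed ~100 floating-point operations per particle push and ~10^9 ops/s per core are our assumptions, not values from the text):

```python
# Back-of-envelope cost estimate for a PIC-MCC run, using the step and
# particle counts quoted above. The ops-per-push and ops-per-second
# figures are illustrative assumptions.
def pic_cost_estimate(n_steps, n_particles, ops_per_push=100):
    """Total operations ~ steps * particles * ops per particle push."""
    return n_steps * n_particles * ops_per_push

# 10^6 time steps and 10^6 particles:
total_ops = pic_cost_estimate(10**6, 10**6)

# At an assumed ~10^9 useful ops/s on one core, the run takes ~10^5 s,
# i.e. on the order of tens of hours, consistent with the range above.
hours = total_ops / 1e9 / 3600
```

For a 2D run with 10^8 particles the same arithmetic gives hundreds of times more work, which is why parallel execution is unavoidable.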

The optimization of a PIC code involves two aspects: first, the algorithms should run fast when executed serially; second, they should retain good parallel efficiency when run in parallel. These two objectives often conflict.
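One serial optimization named in the abstract is particle sorting: reordering particles by cell index so that particles in the same cell sit contiguously in memory, making the field gather/scatter steps cache-friendly. A minimal sketch with NumPy follows; the function name and the flat cell-index formula are our assumptions, not the authors' implementation:

```python
import numpy as np

def sort_particles_by_cell(r, z, dr, dz, nz):
    """Reorder particles so that those sharing a grid cell are contiguous.

    r, z : particle coordinates; dr, dz : cell sizes; nz : cells along Z.
    Returns the reordered coordinates and the sorted flat cell indices.
    """
    ir = (r / dr).astype(np.int64)      # radial cell index
    iz = (z / dz).astype(np.int64)      # axial cell index
    cell = ir * nz + iz                 # flatten to one index (row-major in r)
    order = np.argsort(cell, kind="stable")
    return r[order], z[order], cell[order]
```

In a production code the sort would be done in place (e.g. a counting sort over cells) and only every few time steps, since particles drift slowly between cells.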

In the following sections, we describe a parallel electrostatic PIC/MCC code for CCP simulations and analyze several optimization techniques. In the code, we parallelize the fast Poisson solver by introducing a pipelined form of the tridiagonal solver, and we build a very fast particle-pushing subroutine in assembly.
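The serial kernel underlying such a pipelined tridiagonal solver is the textbook Thomas algorithm. The sketch below shows that kernel only, not the authors' pipelined variant: in the pipelined form, each processor would run these sweeps on its own block of rows and forward the last modified coefficients to the next processor.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d by the Thomas algorithm.

    a : sub-diagonal (a[0] unused), b : main diagonal,
    c : super-diagonal (c[-1] unused), d : right-hand side.
    """
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The forward sweep is inherently sequential in the row index, which is exactly why a naive row-distributed parallelization stalls and a pipelined schedule (overlapping sweeps of the many independent systems produced by the fast Poisson transform) is needed.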

Algorithms of simulation

Consider a 2D simulation of a CCP in cylindrical coordinates. We can use a regular grid of Nr*Nz cells, where Nr, the cell number along R, is typically much larger than the cell number along Z (Nz). In typical simulations of practical reactors, the total cell number exceeds 10^6 and the particle number exceeds 10^8. The running time on a modern PC would be several thousand hours, which is practically unacceptable. Parallel computation is thus a necessity.
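The abstract states that the grid is distributed among processors along the radial direction. A minimal sketch of such a slab decomposition follows; the even-split rule is our simplifying assumption (the load balancing discussed in the paper would instead weight slabs by particle count):

```python
def slab_bounds(nr, nprocs, rank):
    """Return the [lo, hi) range of radial grid rows owned by `rank`
    when nr rows are split as evenly as possible over nprocs processors.
    The first (nr % nprocs) ranks each receive one extra row."""
    base, extra = divmod(nr, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi
```

Each processor then pushes only the particles inside its slab, exchanging boundary field values and migrating particles with its radial neighbors each step.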

Results and benchmarks

For the 2d3v PIC-MCC simulations, we built an 8-node PC cluster: each node has an Intel Core2 E4500 CPU and 2 GB of RAM, and the nodes are connected by full-speed gigabit Ethernet. We ran a discharge simulation for checking and benchmarking. The physical and numerical parameters are as follows: the electrode spacing is 2 cm, divided uniformly into 512 cells. Square cells are used, and the cell number along R is 800, with a gap of 100 cells. In the gap, the potential is linearly

Conclusion

In the present work, two parallel frameworks for 2d3v electrostatic PIC-MCC simulations of CCP are compared and discussed, both theoretically and in practice, and the major performance factors of the simulation are analyzed. The results show that distributing the grid among the processors has clear advantages over randomly distributing the particles. With proper load balancing, more than 80% parallel efficiency can be achieved, and the speedup scales well on the cluster.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 10635010 and No. 10572035).

References (54)

  • V. Vahedi et al.

    Comput. Phys. Commun.

    (1995)
  • J.P. Verboncoeur et al.

    Comput. Phys. Commun.

    (1995)
  • C. Nieter et al.

    J. Comput. Phys.

    (2004)
  • V.K. Decyk et al.

    Comput. Phys. Commun.

    (2004)
  • O. Chanrion et al.

    J. Comput. Phys.

    (2008)
  • P. Messmer et al.

    Comput. Phys. Commun.

    (2004)
  • K.J. Bowers

    J. Comput. Phys.

    (2001)
  • D.V. Anderson et al.

    Comput. Phys. Commun.

    (1995)
  • D. Tskhakaya et al.

    J. Comput. Phys.

    (2007)
  • R.J. Thacker et al.

    Comput. Phys. Commun.

    (2006)
  • M.A. Lieberman et al.

    Principles of Plasma Discharges and Materials Processing

    (2005)
  • M.A. Lieberman

    IEEE Trans. Plasma Sci.

    (1988)
  • H.C. Kim et al.

    Phys. Plasmas

    (2003)
  • J. Robiche et al.

    J. Phys. D: Appl. Phys.

    (2003)
  • A. Salabas et al.

    Plasma Sources Sci. Technol.

    (2005)
  • P.C. Boyle et al.

    J. Phys. D: Appl. Phys.

    (2004)
  • S. Rauf et al.

    Plasma Sources Sci. Technol.

    (2008)
  • C.K. Birdsall et al.

    Plasma Physics via Computer Simulation

    (1985)
  • C.K. Birdsall

    IEEE Trans. Plasma Sci.

    (1991)
  • E. Kawamura et al.

    Phys. Plasmas

    (2006)
  • H.C. Kim et al.

    J. Vac. Sci. Technol. A

    (2006)
  • P.C. Boyle et al.

    Plasma Sources Sci. Technol.

    (2004)
  • V. Georgieva et al.

    Phys. Rev. E

    (2006)
  • F.X. Bronold et al.

    J. Phys. D: Appl. Phys.

    (2007)
  • K. Matyash et al.

    J. Phys. D: Appl. Phys.

    (2007)
  • V. Vahedi et al.

    Plasma Sources Sci. Technol.

    (1993)
  • V. Vahedi et al.

    Plasma Sources Sci. Technol.

    (1993)