Journal of Computational Physics

Volume 230, Issue 23, 20 September 2011, Pages 8427-8451

Weighted Flow Algorithms (WFA) for stochastic particle coagulation

https://doi.org/10.1016/j.jcp.2011.07.027

Abstract

Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

Introduction

Atmospheric aerosol particles consist of a complex mixture of different chemical species [1] (current models typically contain on the order of 20 different species [2]), with particle diameters ranging from a few nanometers to tens of micrometers. To deal with such high-dimensional and multi-scale data, existing models of atmospheric aerosols on the regional or global scale make large approximations. These generally consist of assuming that all particles within a spatial grid-cell have a composition that depends only on diameter or a few other simple parameters, as is done in sectional aerosol models [e.g. [2], [3], [4], [5], [6]] or modal aerosol models [e.g. [7], [8], [9], [10]]. While this makes computation much cheaper, it introduces errors, since it artificially averages the composition of individual particles over a certain size range. These errors are not well-quantified, but could be significant for estimating the effect of aerosols on climate [11].

Recently, particle-resolved aerosol models have been introduced as a way to avoid making a priori assumptions about the evolution of particle composition and to more precisely model aerosol microphysics [12], [13], [14]. These stochastic models simulate a representative group of particles distributed in composition space, treating coagulation, condensation/evaporation, and other important processes on an individual particle level. Applying such a Monte Carlo approach for simulating the evolution of particle distributions dates back to Gillespie [15], who developed the exact Stochastic Simulation Algorithm [see also [16], [17], [18]] to treat the stochastic collision-coalescence process in clouds. While the particle-based models are a step forward in accurately representing the detailed interactions which take place amongst aerosol particles, they can be very expensive to run, and are currently not appropriate for simulating aerosol dynamics and chemistry on meso- or macro-scales.

The main thrust of this paper is to introduce an alternate simulation scheme, which reduces the total error for a given cost. To this end, it is useful to consider the types of error inherent in these simulations. There are three distinct sources of error: a finite-number error, a finite-ensemble error, and a time-discretization error. Since computational power is finite, any individual simulation can only treat a finite number of particles, and any aggregate simulation can consider only a finite ensemble of such individual simulations. Moreover, due to the very short timescales in the problem, performing a direct stochastic simulation is too expensive, and one needs to introduce a discrete timestep. We show below that the most significant error in the parameter regimes we consider is—by far—the finite-ensemble error (see, e.g., Fig. 2, Fig. 4). Since the error due to the finiteness of the ensemble is, at its root, a statistical error, the ensemble variance quantifies this error.

The finite-ensemble error is dominant because there are aerosol particles which are comparatively rare yet nonetheless important to the evolution of the aerosol population. Since these particles are rare, a small ensemble size leads to significant sampling errors in these subpopulations; since they are important to the dynamics, these sampling errors percolate throughout the entire population. More concretely, due to the shape of ambient aerosol size distributions and the subsequent evolution, the number of particles in the sub-micron size range is usually several orders of magnitude larger than the particles in the super-micron size range. Both sub-populations are important since the small particles dominate particle number concentration whereas the large particles dominate particle mass concentrations. Moreover, the most likely coagulation events involve interactions of small and large particles. Thus either the population of large particles is under-resolved and the accuracy of the simulation is significantly compromised, or the total number of computational particles in the entire simulation must be immense.

Our way forward is to add the abstraction that a single computational particle can correspond to some number of real particles, i.e. that each computational particle is “weighted” by an appropriate factor. There is a wide variety of choices to be made here: e.g., how one chooses the weighting, the details of how to implement the effect of this weighting, etc. Our approach is to think of each computational particle as corresponding to some number of real particles, and instead of performing some action on a real particle, we perform the action on the computational particle with some probability; this probability scales inversely with the weight. Thus the algorithm can be thought of as a stochastic splitting scheme, where one first determines the type of event as would be done in a direct simulation, and then one decides whether to perform the event on the computational particle. For example, if we have a particle of weight 10^3 in our population, and a direct simulation would call for this particle to be removed, then we remove it with probability 10^-3.
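The splitting step described above can be sketched in a few lines. The helper name and the Monte Carlo check below are illustrative only, not code from PartMC:

```python
import random

def maybe_remove(weight, rng=random.random):
    """Accept a removal event on a computational particle that stands
    for `weight` real particles: remove it with probability 1/weight,
    so the expected number of real particles removed is correct."""
    return rng() < 1.0 / weight

# A particle of weight 10^3 should be removed with probability 10^-3;
# over many trials the removal fraction approaches that value.
random.seed(0)
trials = 100_000
removed_fraction = sum(maybe_remove(1e3) for _ in range(trials)) / trials
```

A particle of weight 1 is always removed, recovering the direct simulation as a special case.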

Since our simulations involve coagulation events, many events correspond to “triplets” of computational particles (the two particles before the coagulation and the one after). Thus there are many choices for how a stochastic algorithm will treat this triplet, as it could deal with each particle independently, or do so with some correlations. We analyze the most general possible form of such algorithms and optimize over different possible choices. In particular, the particle weighting method that we develop here has smoothly varying particle weights (rather than binning particles into groups with the same weight) and is a function of particle composition (rather than having weights attached to particles [19]). More specifically, the latter means that the weight of a given particle can change during the course of the simulation as its composition or total mass change; the particles are not “tagged” with a constant weight.

Again, we stress that the purpose of this weighting scheme is to reduce ensemble variance, and we computationally over-represent the rare but important particles and under-represent the common and less important particles. As we show below, this tradeoff significantly improves performance.

The idea of weighting particles has been put forward in various diverse forms by, e.g., Babovsky [20], Eibeck and Wagner [21], [22], [23], [24], [25], Kolodko and Sabelfeld [26], Shima et al. [19], Irizarry [27], [28], Wells and Kraft [29], Zhao et al. [30], Zhao and Zheng [31], Zhao et al. [32], and Debry et al. [33]. Our approach is most similar in spirit to that of Kolodko and Sabelfeld [26]; what was considered there was a class of algorithms for pure coagulational processes, and what we do here is present a broad generalization of their family of algorithms for coagulation coupled to the other processes we consider. A large number of papers have considered the Mass Flow Algorithm (MFA) applied to a wide variety of physical problems [21], [22], [23], [24], [25], [29], [34], [35], [36], [37], [38]; we show below that the MFA algorithm is one of the members of the family that we consider below. Another approach is the multi-Monte Carlo method [30], [31], [32], [39] which introduces a splitting scheme very similar to ours but has proposed a different class of weighting schemes.

After we introduce and analyze this family of weighting and splitting schemes, we quantify the performance of using different weighting functions by applying the Weighted Flow Algorithm to the recently-developed particle-resolved model PartMC-MOSAIC [14], [2]. PartMC is a Lagrangian box model that explicitly stores the composition of many individual aerosol particles within a well-mixed computational volume. Relative particle positions within this computational volume are not tracked, but rather the coagulation process is simulated stochastically by assuming that coagulation events are Poisson distributed with a Brownian kernel. Apart from coagulation, the processes of particle emissions from various sources and dilution with the background are treated stochastically. PartMC was coupled with the aerosol chemistry model MOSAIC [2], which simulates the gas and particle phase chemistries, particle phase thermodynamics, and dynamic gas-particle mass transfer in a deterministic manner. The coupled model system, PartMC-MOSAIC, predicts number, mass, and full composition distributions of the aerosol populations. The current version of PartMC is available under the GNU General Public License (GPL) at http://lagrange.mechse.illinois.edu/mwest/partmc/, and the MOSAIC code is available upon request from Zaveri.

We use the urban plume scenario described in Zaveri et al. [11] where we simulated the evolution of the mixing state of an aerosol population in a polluted environment. We show that choosing weighting functions of the form W(μ) = μ^α can greatly increase the accuracy and performance of the total simulation, and that different values of α are optimal for different observables.
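A hedged sketch of such a power-law weighting, with a scalar mass standing in for the full composition vector (the function name and the sign discussion are our illustration, assuming the weight counts real particles per computational particle):

```python
def power_law_weight(mass, alpha):
    """Power-law weighting function W(mu) = mu**alpha.

    alpha = 0 recovers equal weighting.  A negative alpha assigns
    small weights to large particles, so each large computational
    particle stands for fewer real particles and the rare
    large-particle tail is resolved by more computational particles.
    """
    return mass ** alpha

w_equal = power_law_weight(4.0, 0)   # equal weighting: 1.0
w_tail = power_law_weight(4.0, -1)   # large particle gets a small weight
```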

This manuscript is organized as follows. In Section 2 we present the governing equations for the coupled gas-aerosol box model and discuss the approximations needed by this model of the physical system. Section 3 introduces the weighted particle methods. Section 4 describes the mathematical formalism of weighted particle methods for coagulation and condensation. In Section 5 we present the weighted flow algorithms (WFA) as implemented in PartMC. Finally, Section 6 presents the application of PartMC-MOSAIC using WFA to the urban plume scenario. A list of symbols used in the paper is provided in Table A.2.

Section snippets

Continuous model system equations

We model a Lagrangian air parcel containing gas species and aerosol particles. We assume that environmental parameters and gas concentrations are homogeneous within the parcel and we do not track aerosol particle locations.

Each aerosol particle is described by a vector μ ∈ ℝ^A, with the a-th component μ_a ≥ 0 giving the mass of species a. For a population of aerosol particles we denote by N(μ, t) the cumulative aerosol number distribution at time t and constituent masses μ ∈ ℝ^A, which is defined to be the

Weighted particle methods and superparticles

To discretize the aerosol number distribution n(μ) at a given instant of time within the Lagrangian air parcel, we can sample a finite number N_p of computational particles (written as a list Π = {μ^(1), μ^(2), …, μ^(N_p)}) within a computational volume V. Multiple particles may have the same mass vector, making Π a multiset in the sense of Knuth [40, p. 473].

As we will see in Section 6, if each computational particle corresponds to the same number concentration, then this representation can give poor
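A minimal sketch of this discretization, in which scalar masses and a hypothetical lognormal sample stand in for real composition vectors (all numerical values are illustrative):

```python
import random

V = 1.0e-6    # computational volume (illustrative value)
Np = 10_000   # number of computational particles

# The list Pi plays the role of the multiset of particle mass
# vectors; each "particle" is reduced to a single scalar mass here.
random.seed(42)
Pi = [random.lognormvariate(0.0, 1.0) for _ in range(Np)]

# With equal weighting, every computational particle represents the
# same number concentration 1/V, so the total number concentration is
number_conc = len(Pi) / V
```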

Particle evolution equations

In this section we will describe the mathematical formalism behind a particle method algorithm which takes coagulation and advection into account in a systematic way. We will motivate each of these two ideas separately: in Section 4.1 we describe how to develop a weighted particle method for pure coagulation processes and in Section 4.2 we consider pure advection processes.

Weighted Flow Algorithms (WFA)

The Weighted Flow Algorithms (WFA) for coagulation described in Section 4.1 can be implemented with any of the standard Gillespie-type methods [15], [16], [17], [44] with the modified kernel (12a), requiring only the additional step of accepting death events for particle removal with probability (12b), and accepting birth events for new particles with probability (12c). For computational efficiency we implemented the WFA in an approximate fixed-timestep mode with a majorized explicit
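Since Eqs. (12a)-(12c) are not reproduced in this excerpt, the following sketch uses illustrative stand-in forms for the modified kernel and the death/birth acceptance probabilities. It shows only the overall shape of a fixed-timestep weighted coagulation step, not the paper's exact scheme:

```python
import itertools
import random

def wfa_coag_step(particles, weight, kernel, dt, V, rng=random.random):
    """One fixed-timestep coagulation sweep over a list of particle
    masses.  `weight(mu)` is the weighting function and `kernel(a, b)`
    the coagulation kernel.  The weight-modified event rate and the
    parent-removal probabilities below are illustrative stand-ins for
    the paper's Eqs. (12a)-(12c)."""
    births, dead = [], set()
    for i, j in itertools.combinations(range(len(particles)), 2):
        if i in dead or j in dead:
            continue
        mu_i, mu_j = particles[i], particles[j]
        mu_k = mu_i + mu_j  # mass of the coagulated particle
        # Stand-in modified kernel: scale the physical kernel by the
        # weights of the triplet of particles involved.
        rate = kernel(mu_i, mu_j) * weight(mu_k) / (weight(mu_i) * weight(mu_j))
        if rng() < min(1.0, rate * dt / V):
            # Stand-in death acceptance: remove each parent with
            # probability W(mu_k)/W(mu_parent), capped at 1.
            if rng() < min(1.0, weight(mu_k) / weight(mu_i)):
                dead.add(i)
            if rng() < min(1.0, weight(mu_k) / weight(mu_j)):
                dead.add(j)
            births.append(mu_k)
    return [m for idx, m in enumerate(particles) if idx not in dead] + births

# With unit weights the scheme reduces to an unweighted step and
# conserves total mass exactly in every realization:
step = wfa_coag_step([1.0, 2.0, 3.0], lambda mu: 1.0, lambda a, b: 1.0, 0.5, 1.0)
```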

Aerosol distribution functions

To facilitate the discussion of the results we define the following quantities: We take N(D) to be the cumulative number distribution, giving the number of particles per volume that have diameter less than D. Similarly, the cumulative dry mass distribution M(D) gives the dry mass per volume of particles with diameter less than D. We write N = N(∞) and M = M(∞), for the total number and dry mass concentrations, respectively. Given the cumulative distributions, we define the number distribution n(D)
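These cumulative distributions can be evaluated directly on a computational particle list. A hedged sketch, assuming equal weights and hypothetical diameters and dry masses:

```python
def cumulative_number(diams, V, D):
    """N(D): number of particles per volume with diameter less than D."""
    return sum(1 for d in diams if d < D) / V

def cumulative_dry_mass(diams, dry_masses, V, D):
    """M(D): dry mass per volume of particles with diameter less than D."""
    return sum(m for d, m in zip(diams, dry_masses) if d < D) / V

diams = [0.1, 0.5, 1.0, 2.0, 10.0]       # diameters (illustrative)
masses = [1e-6, 1e-4, 1e-3, 1e-2, 1.0]   # dry masses (illustrative)
V = 1.0                                  # computational volume

# N = N(inf) and M = M(inf) are the total concentrations.
N_total = cumulative_number(diams, V, float("inf"))
M_total = cumulative_dry_mass(diams, masses, V, float("inf"))
```

Note that in this toy population the two small particles dominate N(1.0) while the single largest particle carries almost all of M, mirroring the sub-/super-micron imbalance discussed in the introduction.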

Conclusions

In this paper we describe the development and application of the Weighted Flow Algorithms (WFA) for particle-resolved methods, a generalized way of weighting computational particles to improve efficiency. We have shown that there exist particle methods which work for any given weighting function and how different weighting functions are appropriate for measuring different observables. In particular, for the application shown in this paper, we used weighting functions that are power laws in

Acknowledgments

The authors acknowledge funding from the National Science Foundation (NSF) under Grant CMG-0934491.

References (53)

  • M. Celnik et al.

    Coupling a stochastic soot population balance to gas-phase chemistry using operator splitting

    Combust. Flame

    (2007)
  • M. Celnik et al.

    A predictor-corrector algorithm for the coupling of stiff ODEs to a particle population balance

    J. Comput. Phys.

    (2009)
  • H. Zhao et al.

    A new event-driven constant-volume method for solution of the time evolution of particle size distribution

    J. Comput. Phys.

    (2009)
  • N. Riemer et al.

    Estimating black carbon aging time-scales with a particle-resolved aerosol model

    J. Aerosol Sci.

    (2010)
  • M.J. Cubison et al.

    The influence of chemical composition and mixing state on Los Angeles urban aerosol on CCN number and cloud properties

    Atmos. Chem. Phys.

    (2008)
  • R.A. Zaveri et al.

    Model for simulating aerosol interactions and chemistry (MOSAIC)

    J. Geophys. Res.

    (2008)
  • M.J. Kleeman et al.

    A 3D Eulerian source-oriented model for an externally mixed aerosol

    Environ. Sci. Technol.

    (2001)
  • M.Z. Jacobson

    Analysis of aerosol interactions with numerical techniques for solving coagulation, nucleation, condensation, dissolution, and reversible chemistry among multiple size distributions

    J. Geophys. Res.

    (2002)
  • P.J. Adams et al.

    Global concentration of tropospheric sulphate, nitrate and ammonium simulated in a general circulation model

    J. Geophys. Res.

    (1999)
  • J. Wilson et al.

    A modeling study of global mixed aerosol fields

    J. Geophys. Res.

    (2001)
  • P. Stier et al.

    The aerosol-climate model ECHAM5-HAM

    Atmos. Chem. Phys.

    (2005)
  • F.S. Binkowski et al.

    The regional particulate matter model, 1. Model description and preliminary results

    J. Geophys. Res.

    (1995)
  • N. Riemer et al.

    Modeling aerosols on the mesoscale γ, part I: Treatment of soot aerosol and its radiative effects

    J. Geophys. Res.

    (2003)
  • R. Zaveri et al.

    Effect of aerosol mixing-state on optical and cloud activation properties

J. Geophys. Res.

    (2010)
  • N. Riemer et al.

    Simulating the evolution of soot mixing state with a particle-resolved aerosol model

    J. Geophys. Res.

    (2009)
  • D.T. Gillespie

    An exact method for numerically simulating the stochastic coalescence process in a cloud

    J. Atmos. Sci.

    (1975)