
Jump Longer to Jump Less: Improving Dynamic Boundary Projection with h-Scaling

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13479)

Abstract

The master equation exactly describes the dynamics of a Markov Population Process (MPP) by associating one differential equation with each discrete state of the process. It is well known that MPPs suffer from the so-called curse of dimensionality, which makes the master equation intractable in most cases. We propose a novel approach, called h-scaling, that covers the state space of an MPP with a smaller number of states by an appropriate re-scaling of the MPP transition rate functions. When the original state space is bounded, this procedure may significantly reduce the number of states while returning an approximate master equation that still retains good accuracy. We present h-scaling together with theoretical results on asymptotic correctness and numerical examples taken from the performance evaluation literature. Moreover, we show that h-scaling can be combined with a recently proposed framework called dynamic boundary projection, which couples subsets of the master equation with mean-field approximations, to further reduce the number of equations without penalizing accuracy.


References

  1. Anselmi, J., Verloop, I.M.: Energy-aware capacity scaling in virtualized environments with performance guarantees. Perform. Eval. 68(11), 1207–1221 (2011)

  2. Baskett, F., Chandy, K.M., Muntz, R.R., Palacios, F.G.: Open, closed, and mixed networks of queues with different classes of customers. J. ACM 22(2), 248–260 (1975)

  3. Benaim, M., Le Boudec, J.Y.: A class of mean field interaction models for computer and communication systems. Perform. Eval. 65(11–12), 823–838 (2008)

  4. Bortolussi, L., Hillston, J., Latella, D., Massink, M.: Continuous approximation of collective system behaviour: a tutorial. Perform. Eval. 70(5), 317–349 (2013)

  5. Buchholz, P.: Exact and ordinary lumpability in finite Markov chains. J. Appl. Probab. 31(1), 59–75 (1994)

  6. Cao, Y., Li, H., Petzold, L.: Efficient formulation of the stochastic simulation algorithm for chemically reacting systems. J. Chem. Phys. 121(9), 4059–4067 (2004)

  7. Ciocchetta, F., Degasperi, A., Hillston, J., Calder, M.: Some investigations concerning the CTMC and the ODE model derived from Bio-PEPA. Electron. Notes Theor. Comput. Sci. 229(1), 145–163 (2009)

  8. Darling, R.: Fluid limits of pure jump Markov processes: a practical guide. arXiv preprint math/0210109 (2002)

  9. Darling, R., Norris, J.R.: Differential equation approximations for Markov chains. Probab. Surv. 5, 37–79 (2008)

  10. Gast, N., Bortolussi, L., Tribastone, M.: Size expansions of mean field approximation: transient and steady-state analysis. Perform. Eval. 129, 60–80 (2019)

  11. Gast, N., Van Houdt, B.: A refined mean field approximation. Proc. ACM Meas. Anal. Comput. Syst. 1, 1–28 (2017)

  12. Kurtz, T.G.: Solutions of ordinary differential equations as limits of pure jump Markov processes. J. Appl. Probab. 7(1), 49–58 (1970)

  13. Liu, Y., Li, W., Masuyama, H.: Error bounds for augmented truncation approximations of continuous-time Markov chains. Oper. Res. Lett. 46(4), 409–413 (2018)

  14. Minnebo, W., Van Houdt, B.: A fair comparison of pull and push strategies in large distributed networks. IEEE/ACM Trans. Netw. 22(3), 996–1006 (2013)

  15. Munsky, B., Khammash, M.: The finite state projection algorithm for the solution of the chemical master equation. J. Chem. Phys. 124(4) (2006)

  16. Parekh, A.K., Gallager, R.G.: A generalized processor sharing approach to flow control in integrated services networks: the single-node case. IEEE/ACM Trans. Netw. 1(3), 344–357 (1993)

  17. Parekh, A.K., Gallager, R.G.: A generalized processor sharing approach to flow control in integrated services networks: the multiple node case. IEEE/ACM Trans. Netw. 2(2), 137–150 (1994)

  18. Randone, F., Bortolussi, L., Tribastone, M.: Refining mean-field approximations by dynamic state truncation. Proc. ACM Meas. Anal. Comput. Syst. 5(2), 1–30 (2021)

  19. Van Kampen, N.G.: Stochastic Processes in Physics and Chemistry, vol. 1. Elsevier, New York (1992)

  20. Xie, Q., Dong, X., Lu, Y., Srikant, R.: Power of d choices for large-scale bin packing: a loss model. ACM SIGMETRICS Perform. Eval. Rev. 43(1), 321–334 (2015)

  21. Yang, X., De Veciana, G.: Service capacity of peer to peer networks. In: IEEE INFOCOM 2004, vol. 4, pp. 2242–2252. IEEE (2004)

  22. Zhu, L., Casale, G., Perez, I.: Fluid approximation of closed queueing networks with discriminatory processor sharing. Perform. Eval. 139 (2020)


Appendix

8.1 Derivation of Scaled DBP

Having defined the truncations for \(\mathcal {S}^h\) as in Sect. 3.2, we proceed as in the derivation of DBP.

The border sets for the scaled truncations are defined as:

$$\begin{aligned} \partial \mathcal {T}^h_l(n,y)&= \left\{ x \in \mathcal {T}^h(n,y) : x+hl \not \in \mathcal {T}^h(n,y) \right\} , \ \text {for } l \in \mathcal {L},\\ \partial \mathcal {T}^h(n,y)&= \bigcup _{l \in \mathcal {L}} \partial \mathcal {T}^h_l(n,y) = \left\{ x \in \mathcal {T}^h(n,y) : \exists \, l \in \mathcal {L} \text { s.t. } x+hl \not \in \mathcal {T}^h(n,y) \right\} . \end{aligned}$$
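As a concrete illustration, the truncation and its border sets can be enumerated directly. The following is a minimal sketch (not the authors' code), assuming integer-valued states and an integer h, so that states can be compared exactly as tuples:

```python
# Minimal sketch of T^h(n, y) and its border sets dT^h_l(n, y).
# States are tuples; n, y, and jump vectors l are integer sequences.
import itertools
import numpy as np

def truncation(n, y, h):
    """States {y + h*(k_1 e_1 + ... + k_m e_m) : 0 <= k_i <= floor(n_i / h)}."""
    ranges = [range(int(np.floor(ni / h)) + 1) for ni in n]
    return {tuple(np.array(y) + h * np.array(k)) for k in itertools.product(*ranges)}

def border_set(T, l, h):
    """States of T whose jump x + h*l leaves T, i.e. dT^h_l(n, y)."""
    return {x for x in T if tuple(np.array(x) + h * np.array(l)) not in T}
```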

We can then define the boundary projection of \(X^h\) on \(\mathcal {T}^h(n,y)\), in which every jump from \(x \in \partial \mathcal {T}^h_l(n,y)\) to \(x'\) is redirected with the same rate to \(x^*\), defined as:

$$\begin{aligned} x^*_i = {\left\{ \begin{array}{ll} \min (y_i+h\lfloor \frac{n_i}{h}\rfloor , x_i') &{} \text{ if } x_i'>x_i \\ \max (y_i, x_i') &{} \text{ if } x_i'<x_i \\ x_i &{} \text{ if } x_i'=x_i. \end{array}\right. } \end{aligned}$$
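A sketch of this redirection rule under the same assumptions as above (numpy arrays for x, x', n, y; the function name redirect is ours):

```python
# The redirection clips an exiting jump componentwise to the window
# [y_i, y_i + h*floor(n_i/h)], exactly as in the display above.
import numpy as np

def redirect(x, x_prime, n, y, h):
    upper = y + h * np.floor(n / h)                 # componentwise cap y_i + h*floor(n_i/h)
    return np.where(x_prime > x, np.minimum(upper, x_prime),
           np.where(x_prime < x, np.maximum(y, x_prime), x))
```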

After performing the augmentation, we obtain the jump vectors \(l^{(n,h)}(x)\), defined exactly as before. Then, letting \(X^{(n,h)}_y\) be the boundary projection of \(X^h\) on \(\mathcal {T}^h(n,y)\), its transition rate matrix \(Q^{(n,h)}(y)\) can be written for \(x, x' \in \mathcal {T}^h(n,0)\) as:

$$\begin{aligned}{}[Q^{(n,h)}(y)]_{x,x'} = {\left\{ \begin{array}{ll} \sum _{l \in \mathcal {L}} \mathbb {I}_{\{x'+l^{(n,h)}(x')=x\}} \frac{1}{h}\beta _l(x' + y) &{} \text{ if } x \ne x'\\ - \sum _{l \in \mathcal {L}} \mathbb {I}_{\{l^{(n,h)}(x)\ne 0\}} \frac{1}{h}\beta _l(x + y) &{} \text{ if } x = x'. \end{array}\right. } \end{aligned}$$
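Putting the two pieces together, \(Q^{(n,h)}(y)\) can be assembled by attempting every jump hl from every state and redirecting those that exit the truncation. A hedged sketch, where states is an ordered list of the tuples in \(\mathcal {T}^h(n,0)\) and rates maps each jump vector l to its rate function \(\beta_l\) (both names are ours):

```python
# Assemble Q^{(n,h)}(y). Columns index the source state, matching the
# definition above, so the master equation reads dP/dt = Q P.
import numpy as np

def generator(states, rates, n, y, h):
    index = {x: i for i, x in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for x in states:
        i = index[x]
        for l, beta in rates.items():
            target = np.array(x) + h * np.array(l)          # attempted jump h*l
            if tuple(target) not in index:                  # jump exits: redirect to x*
                target = redirect(np.array(x), target,
                                  np.array(n), np.zeros_like(target), h)
            j = index[tuple(target)]
            if j != i:                                      # redirected self-jumps vanish
                r = beta(np.array(x) + y) / h               # rate (1/h) * beta_l(x + y)
                Q[j, i] += r                                # inflow into target from x
                Q[i, i] -= r                                # outflow from x
    return Q
```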

The ME for \(X^{(n,h)}_y\) then reads:

$$\begin{aligned} \frac{dP^{(n,h)}_y}{dt} = Q^{(n,h)}(y) P^{(n,h)}_y({}\cdot {};t) \end{aligned}$$

where \(P^{(n,h)}_y({}\cdot {};t)\) is an \(\mathcal {N}^h(n)\)-dimensional vector.
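For a fixed shift y this is a finite linear ODE system, so it can be integrated directly; a minimal sketch with SciPy:

```python
# Integrate dP/dt = Q P for a fixed shift y.
import numpy as np
from scipy.integrate import solve_ivp

def solve_me(Q, p0, t_end):
    sol = solve_ivp(lambda t, p: Q @ p, (0.0, t_end), p0, method="LSODA")
    return sol.t, sol.y          # sol.y[:, k] is P^{(n,h)}_y(.; t_k)
```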

Again, to pass to DBP, we need to define the functions:

$$ \varPi ^{(n,h)}_i(x,y) = {\left\{ \begin{array}{ll} x_i &{} x_i < y_i \\ y_i+h\left\lceil \frac{x_i - \left( y_i + h\lfloor \frac{n_i}{h} \rfloor \right) }{h} \right\rceil &{} x_i > y_i+n_i \\ y_i &{} y_i \le x_i \le y_i+n_i \end{array}\right. } \quad \forall \,x,y \in \mathcal {S}^h$$
$$\mathcal {Y}^h_l(n,x) = \varPi ^{(n,h)}(x+l,0) \quad \forall \, l \in \mathcal {L}, \, \forall \, x \in \partial \mathcal {T}^h_l(n,0).$$

Observe that the second case in the definition of \(\varPi ^{(n,h)}(x,y)\) is motivated by the fact that x may not be of the form \(y + h(k_1e_1 + \ldots + k_me_m)\); to mirror what happens in classic DBP, we want the function to return the closest \(y'\) of this form such that \(\mathcal {T}^h(n,y')\) contains x.
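A componentwise sketch of \(\varPi ^{(n,h)}\) and \(\mathcal {Y}^h_l\), using the ceiling term as reconstructed above:

```python
# Pi^{(n,h)} applied componentwise via numpy, and the shift targets Y^h_l.
import numpy as np

def pi(x, y, n, h):
    upper = y + h * np.floor(n / h)                        # y_i + h*floor(n_i/h)
    return np.where(x < y, x,
           np.where(x > y + n, y + h * np.ceil((x - upper) / h), y))

def shift_target(x, l, n, h):
    """Y^h_l(n, x) = Pi^{(n,h)}(x + l, 0), as in the display above."""
    x = np.asarray(x, dtype=float)
    return pi(x + np.asarray(l), np.zeros_like(x), np.asarray(n), h)
```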

Then the equations for scaled DBP with parameter n are given by:

$$\begin{aligned} \begin{aligned} \frac{dY^{(n,h)}}{dt}&= \sum _{l \in \mathcal {L}} \sum _{x \in \partial \mathcal {T}^h_l(n,0)} \mathcal {Y}^h_l(n,x)\frac{1}{h}\beta _l(x+Y^{(n,h)}(t))P^{(n,h)}(x;t) \\ \frac{dP^{(n,h)}}{dt}&= Q^{(n,h)}(Y^{(n,h)}(t)) P^{(n,h)}({}\cdot {};t). \end{aligned} \end{aligned}$$
(19)
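The coupled system (19) can be integrated by stacking \(Y^{(n,h)}\) and \(P^{(n,h)}\) into a single vector and advancing them together; a sketch reusing the helpers above (all of them illustrative, not the authors' implementation):

```python
# Right-hand side of the coupled scaled-DBP system (19).
import numpy as np
from scipy.integrate import solve_ivp

def dbp_rhs(t, z, states, rates, n, h, dim):
    Y, P = z[:dim], z[dim:]
    dY = np.zeros(dim)
    state_set = set(states)
    for l, beta in rates.items():
        for x in border_set(state_set, l, h):              # x in dT^h_l(n, 0)
            i = states.index(x)
            dY += shift_target(x, l, n, h) * beta(np.array(x) + Y) / h * P[i]
    dP = generator(states, rates, n, Y, h) @ P
    return np.concatenate([dY, dP])

# Usage, e.g.:
# solve_ivp(dbp_rhs, (0.0, t_end), np.concatenate([Y0, P0]),
#           args=(states, rates, n, h, dim))
```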

Again, supposing \(X(0)=x_0\) with probability 1, to define the initial condition we set:

$$\begin{aligned} \left[ Y^{(n,h)}(0) \right] _i&= \max \left( 0, x_{0,i} - h \Big \lfloor \frac{n_i}{h} \Big \rfloor \right) \\ x^*_0&= h\Big \lfloor \frac{x_0-Y^{(n,h)}(0)}{h} \Big \rfloor \\ P^{(n,h)}(x;0)&= {\left\{ \begin{array}{ll} 1 &{}\text { if } x = x_0^*,\\ 0 &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$
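A sketch of these initial conditions, under the same illustrative assumptions as the helpers above:

```python
# Initial conditions: any mass of x0 the window cannot cover goes into
# Y(0); the remainder is snapped down to the h-grid.
import numpy as np

def initial_conditions(x0, n, h, states):
    Y0 = np.maximum(0.0, x0 - h * np.floor(n / h))
    x0_star = h * np.floor((x0 - Y0) / h)
    P0 = np.array([1.0 if np.allclose(np.array(x, dtype=float), x0_star) else 0.0
                   for x in states])
    return Y0, P0
```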

8.2 Equations for Example 3

The equations for scaled DBP read:

$$\begin{aligned} \begin{aligned} \frac{dY^{(n,h)}}{dt}&= -\frac{\mu }{h}\min \left( Y^{(n,h)}(t),k\right) P^{(n,h)}(0;t) + \frac{\lambda }{h}P^{(n,h)}\left( h\left( \Big \lfloor \frac{N}{h}\Big \rfloor -1\right) ;t\right) \\ \frac{dP^{(n,h)}(x)}{dt}&= {\left\{ \begin{array}{ll} -\frac{\lambda }{h}P^{(n,h)}(0;t) + \frac{\mu }{h}\min \left( h+Y^{(n,h)}(t),k\right) P^{(n,h)}(h;t) &{} x=0 \\ -\left( \frac{\lambda }{h}+\frac{\mu }{h}\min \left( x+Y^{(n,h)}(t),k\right) \right) P^{(n,h)}(x;t) + \frac{\lambda }{h}P^{(n,h)}(x-h;t) \\ \quad + \frac{\mu }{h}\min \left( x+h+Y^{(n,h)}(t),k\right) P^{(n,h)}(x+h;t) &{}x\ne 0,\, h\lfloor \frac{N}{h}\rfloor \\ -\frac{\mu }{h}\min \left( h\lfloor \frac{N}{h}\rfloor +Y^{(n,h)}(t),k\right) P^{(n,h)}\left( h\lfloor \frac{N}{h}\rfloor ;t\right) + \frac{\lambda }{h} P^{(n,h)}\left( h\left( \lfloor \frac{N}{h}\rfloor -1\right) ;t\right) &{} x = h\lfloor \frac{N}{h}\rfloor \end{array}\right. } \end{aligned} \end{aligned}$$
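For concreteness, a self-contained sketch of this right-hand side, with lam, mu, k, N, h as illustrative parameter names and the probability vector indexed by the grid \(0, h, \ldots, h\lfloor N/h\rfloor\):

```python
# Scaled-DBP right-hand side for Example 3, mirroring the cases above.
import numpy as np

def example3_rhs(t, z, lam, mu, k, N, h):
    Y, P = z[0], z[1:]
    M = len(P) - 1                                 # last index, state h*floor(N/h)
    xs = h * np.arange(M + 1)                      # grid 0, h, ..., h*floor(N/h)
    dP = np.zeros_like(P)
    # state 0: arrivals out, services in from state h
    dP[0] = -lam / h * P[0] + mu / h * min(h + Y, k) * P[1]
    # interior states: arrivals and services out, neighbours in
    x = xs[1:M]
    dP[1:M] = (-(lam / h + mu / h * np.minimum(x + Y, k)) * P[1:M]
               + lam / h * P[0:M - 1]
               + mu / h * np.minimum(x + h + Y, k) * P[2:M + 1])
    # last state: services out, arrivals in from the left neighbour
    dP[M] = -mu / h * min(xs[M] + Y, k) * P[M] + lam / h * P[M - 1]
    # mean-field shift, as in the first equation above
    dY = -mu / h * min(Y, k) * P[0] + lam / h * P[M - 1]
    return np.concatenate([[dY], dP])
```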

8.3 Proof of Theorem 4

Theorem 5

Suppose that for the sequence \(\left( X^N \right) _{N \ge N_0}\) the hypotheses of Theorem 2 are satisfied and, in addition:

  • Equation (3) admits a globally asymptotically stable equilibrium \(x^*\);

  • for each N, \(X^N(0)=N\hat{x}_0\);

  • for each N, \(\gamma _N = N\).

Then, letting \(\mu (t)\) and \(\varSigma (t)\) denote the mean and the covariance matrix of the limiting Gaussian process for the original sequence, the sequence of approximating processes \(\left( X^{N,h}\right) _{N \ge N_0}\) admits a limiting Gaussian process with mean \(\mu ^h(t)\) and covariance matrix \(\varSigma ^h(t)\) such that:

$$ \lim _{t \rightarrow \infty } \mu ^h(t) = \lim _{t \rightarrow \infty } \mu (t) = 0 \text { and } \varSigma ^h(t) = \varSigma (t) \, \forall \, t \ge 0.$$

Proof

Theorem 2 guarantees that, under these hypotheses, \(\mu (t)\) and \(\varSigma (t)\) exist.

The rest of the proof is obtained by following the same derivation used in [19] with the ansatz:

$$\begin{aligned} \hat{X}^{N,h}(t) = \hat{x}(t) + \sqrt{\frac{h}{N}}\xi ^h(t). \end{aligned}$$
(20)

and verifying that \(\xi ^h(t)\) is a Gaussian process whose mean \(\mu ^h(t)\) and covariance \(\varSigma ^h(t)\) satisfy exactly the same ODEs as \(\mu (t)\) and \(\varSigma (t)\), i.e., (5) and (6).
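For reference, ODEs of this kind have the standard linear-noise form below; this is our hedged reconstruction in generic notation, writing \(F(x) = \sum _{l \in \mathcal {L}} l\,\beta _l(x)\) for the drift and \(J_F\) for its Jacobian (the paper's (5) and (6) may differ in notation):

$$\begin{aligned} \frac{d\mu (t)}{dt}&= J_F\left( \hat{x}(t)\right) \mu (t), \\ \frac{d\varSigma (t)}{dt}&= J_F\left( \hat{x}(t)\right) \varSigma (t) + \varSigma (t)\,J_F\left( \hat{x}(t)\right) ^{\top } + \sum _{l \in \mathcal {L}} l\,l^{\top }\beta _l\left( \hat{x}(t)\right) . \end{aligned}$$

In this form, the equation for \(\mu\) is precisely the variational equation of the deterministic limit (3), which is what the final step of the proof relies on.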

Furthermore, in the sequence of approximating processes \(\left( X^{N,h} \right) _{N \ge N_0}\) we have redefined the initial conditions as \(X^{N,h}(0) = h \lfloor \frac{N\hat{x}_0}{h}\rfloor \), while the initial condition for the deterministic process remains unchanged. Therefore, when setting the initial condition for \(\mu ^h(t)\) we need to take into account that, for the ansatz to be valid at time \(t=0\), the limiting Gaussian process may have non-zero mean, namely \(\mu ^h(0) = \sqrt{\frac{h}{N}}\left( \lfloor \frac{N\hat{x}_0}{h}\rfloor -N\hat{x}_0\right) .\)

So, in general, \(\xi ^h(t)\), describing the fluctuations of \(X^{N,h}\), differs from \(\xi (t)\), describing the fluctuations of \(X^N\), since \(\mu ^h(0) \ne \mu (0) = 0\) (the covariance matrix, instead, is unchanged, i.e., \(\varSigma ^h(t) = \varSigma (t)\) for all \(t \ge 0\)).

However, Eq. (5) is exactly the variational equation associated with the ODEs defining the deterministic limit (3), so, regardless of its initial condition, its solution must tend to 0 as \(\hat{x}(t)\) tends to the equilibrium \(x^*\). This implies \(\lim _{t \rightarrow \infty } \mu ^h(t) = \lim _{t \rightarrow \infty } \mu (t) = 0\).

Observe that all the introduced hypotheses are needed for the correct application of the ansatz: the differentiability of the drifts is needed to apply the Taylor expansion as in [19], while the presence of a globally asymptotically stable equilibrium ensures that the ansatz remains valid for \(t \in [0, +\infty ).\)

8.4 Additional Data on the Malware Propagation Model

Fig. 5. Average number of dormant and susceptible agents in the Malware Propagation model, computed using h-scaling and scaled DBP.


Copyright information

© 2022 Springer Nature Switzerland AG


Cite this paper

Randone, F., Bortolussi, L., Tribastone, M. (2022). Jump Longer to Jump Less: Improving Dynamic Boundary Projection with h-Scaling. In: Ábrahám, E., Paolieri, M. (eds) Quantitative Evaluation of Systems. QEST 2022. Lecture Notes in Computer Science, vol 13479. Springer, Cham. https://doi.org/10.1007/978-3-031-16336-4_8

  • DOI: https://doi.org/10.1007/978-3-031-16336-4_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16335-7

  • Online ISBN: 978-3-031-16336-4
