
Running-Time Analysis of Particle Swarm Optimization with a Single Particle Based on Average Gain

  • Conference paper
  • Simulated Evolution and Learning (SEAL 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10593)

Abstract

Running-time analysis of particle swarm optimization (PSO) is a hard problem in the field of swarm intelligence, especially for PSO variants whose solutions and velocities are encoded continuously. In this study, the running time of particle swarm optimization with a single particle (PSO-SP) is analyzed. An elite selection strategy and stochastic disturbance are combined in PSO-SP in order to improve its optimization capacity and to adjust the direction of the velocity of the single particle. Running-time analysis based on the average gain model is applied to PSO-SP in two situations: stochastic disturbance drawn from a uniform distribution and from the standard normal distribution. The theoretical results show that the running time of PSO-SP is exponential under both distributions. Moreover, at the same accuracy and the same fitness difference value, the running time of PSO-SP with the uniform disturbance is better than that with the standard normal disturbance.
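The full algorithm definition is not included in this excerpt; as an illustration only, the abstract's ingredients (a single particle, elite selection, and an additive stochastic disturbance drawn from either a uniform or a standard normal distribution) can be sketched as below. All constants, the update rule, and the sphere-like fitness difference are assumptions of this sketch, not the paper's settings.

```python
import math
import random

def pso_sp(dim=2, steps=300, disturbance="uniform", seed=1):
    """Single-particle PSO sketch with elite selection and stochastic
    disturbance. Inertia w, attraction c, and the disturbance scale are
    illustrative assumptions, not the paper's parameters."""
    rng = random.Random(seed)

    def d(x):  # fitness difference: distance to the optimum at the origin
        return math.sqrt(sum(xi * xi for xi in x))

    x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    v = [0.0] * dim
    best_x, best_d = list(x), d(x)
    init_d = best_d
    w, c = 0.7, 1.5
    for _ in range(steps):
        if disturbance == "uniform":
            u = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        else:  # standard normal disturbance
            u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        # velocity: inertia + pull toward the elite (best-so-far) + disturbance
        v = [w * vi + c * rng.random() * (bi - xi) + 0.1 * ui
             for vi, bi, xi, ui in zip(v, best_x, x, u)]
        x = [xi + vi for xi, vi in zip(x, v)]
        if d(x) < best_d:  # elite selection: the best-so-far is never lost
            best_x, best_d = list(x), d(x)
    return init_d, best_d

print(pso_sp(disturbance="uniform"))
print(pso_sp(disturbance="normal"))
```

Because of the elite selection, the returned best fitness difference can never exceed the initial one, regardless of the disturbance distribution.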



Acknowledgement

This work is supported by the National Natural Science Foundation of China (61370102), the Guangdong Natural Science Funds for Distinguished Young Scholars (2014A030306050), the Ministry of Education - China Mobile Research Funds (MCM20160206), and the Guangdong High-level Personnel of Special Support Program (2014TQ01X664).

Author information


Corresponding author

Correspondence to Yang Shuling.


Appendices

Appendix A Proof of Lemma 1

Write \( T|_{0}^{{d\left( {x_{0} } \right)}} \) simply as T, let \( P\left( {T = t} \right) \) be the probability that the computation time equals t, and let \( D_{t} = d\left( {x_{t} } \right) - d\left( {x_{t + 1} } \right) \) be the one-step decrease of the fitness difference; then \( d\left( {x_{0} } \right) \) satisfies

$$ \begin{aligned} d\left( {x_{0} } \right) \ge E\left( {\mathop \sum \limits_{t = 0}^{T - 1} D_{t} } \right) & = \mathop \sum \limits_{t = 0}^{ + \infty } P\left( {T = t} \right) \cdot E\left( {\mathop \sum \limits_{i = 0}^{t - 1} D_{i} \,|\,T = t} \right) \\ & = \mathop \sum \limits_{t = 0}^{ + \infty } P\left( {T = t} \right) \cdot \mathop \sum \limits_{i = 0}^{t - 1} E\left( {D_{i} \,|\,T = t} \right) \\ & = \mathop \sum \limits_{i = 0}^{ + \infty } \mathop \sum \limits_{t = i + 1}^{ + \infty } P\left( {T = t} \right) \cdot E\left( {D_{i} \,|\,T = t} \right) \\ & = \mathop \sum \limits_{i = 0}^{ + \infty } \mathop \sum \limits_{t = i + 1}^{ + \infty } P\left( {T \ge i + 1} \right) \cdot P\left( {T = t\,|\,T \ge i + 1} \right) \cdot E\left( {D_{i} \,|\,T = t} \right) \\ & = \mathop \sum \limits_{i = 0}^{ + \infty } P\left( {T \ge i + 1} \right) \cdot E\left( {D_{i} \,|\,T \ge i + 1} \right) \ge V \cdot \mathop \sum \limits_{i = 0}^{ + \infty } P\left( {T \ge i + 1} \right) \\ & = V \cdot \mathop \sum \limits_{i = 0}^{ + \infty } \mathop \sum \limits_{j = i + 1}^{ + \infty } P\left( {T = j} \right) = V \cdot \mathop \sum \limits_{j = 1}^{ + \infty } j \cdot P\left( {T = j} \right) = V \cdot E\left( T \right) \\ \end{aligned} $$

Thus, when \( V > 0 \), \( E\left( T \right) \le \frac{{d\left( {x_{0} } \right)}}{V} \).
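Lemma 1's bound \( E(T) \le d(x_{0})/V \) can be sanity-checked numerically on a toy drift process in which the minimal expected one-step gain V is known exactly. The toy process itself is an illustrative assumption, not taken from the paper.

```python
import random

def hitting_time(d0, rng):
    """One run of a toy drift process: from integer fitness difference d0,
    each step gains 1 or 2 with equal probability, capped at the remaining
    distance, until the difference hits 0 (an assumed toy process)."""
    d, t = d0, 0
    while d > 0:
        d -= min(d, rng.choice((1, 2)))
        t += 1
    return t

rng = random.Random(0)
# Expected one-step gain is 1.5 for d >= 2 and exactly 1 at d = 1, so V = 1.
d0, V = 10, 1.0
mean_T = sum(hitting_time(d0, rng) for _ in range(20000)) / 20000
print(mean_T, "<=", d0 / V)  # Lemma 1: E(T) <= d(x0)/V
```

The empirical mean hitting time stays comfortably below the bound d(x0)/V = 10, while the minimum possible hitting time (all gains equal to 2) is 5.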

Appendix B Proof of Theorem 1

Given l (\( \varepsilon < l < L \)), consider the subinterval \( Z^{\prime} = \left\{ {x\,|\,l < d\left( x \right) \le l + {\it \Delta} l} \right\} \) and define the fitness difference function \( d^{\prime}\left( x \right) = d\left( x \right) - l \). Starting from the right endpoint of the subinterval, \( d\left( {x_{0} } \right) = l + {\it \Delta} l \), the left endpoint is reached after \( T|_{l}^{{l + {\it \Delta} l}} \) steps. With \( V^{\prime}\text{ := }min\{ E(d^{\prime}\left( {x_{t} } \right) - d^{\prime}\left( {x_{t + 1} } \right))\,|\,t \in N\} \), the average computation time \( E\left( {T|_{l}^{{l + {\it \Delta} l}} } \right) \) satisfies

$$ \begin{aligned} E\left( {T|_{l}^{{l + {\it \Delta} l}} } \right) \le \frac{{d^{\prime } \left( {x_{0} } \right)}}{{V^{\prime}}} & = \frac{{{\it \Delta} l}}{{min\{ E(d^{\prime } \left( {x_{t} } \right) - d^{\prime } \left( {x_{t + 1} } \right))\,|\,0 \le d^{\prime } \left( {x_{t} } \right) \le {\it \Delta} l\} }} \\ & = \frac{{{\it \Delta} l}}{{min\{ E(d\left( {x_{t} } \right) - d\left( {x_{t + 1} } \right))\,|\,l \le d\left( {x_{t} } \right) \le l + {\it \Delta} l\} }} \\ & = \frac{{{\it \Delta} l}}{{min\{ G\left( r \right)\,|\,l \le r \le l + {\it \Delta} l\} }} \\ \end{aligned} $$

Since G(r) is an increasing function, \( min\left\{ {G\left( r \right)\,|\,l \le r \le l + \Delta l} \right\} = G\left( l \right) \); thus \( E\left( {T|_{l}^{l + \Delta l} } \right) \le \frac{\Delta l}{G\left( l \right)} \).

According to the Riemann integral, the average computation time of PSO-SP is

$$ E\left( {T|_{\varepsilon }^{L} } \right) = \mathop {lim}\limits_{n \to + \infty } \mathop \sum \limits_{i = 1}^{n} E\left( {T|_{{y_{i - 1} }}^{{y_{i} }} } \right) \le \mathop {lim}\limits_{n \to + \infty } \mathop \sum \limits_{i = 1}^{n} \frac{{\Delta y_{i} }}{{G\left( {y_{i - 1} } \right)}} = \mathop \int \nolimits_{\varepsilon }^{L} \frac{1}{{G\left( r \right)}}dr $$

Here the interval \( \left[ {\varepsilon ,L} \right] \) is partitioned into n subintervals: \( \varepsilon = y_{0} < y_{1} < \ldots < y_{n - 1} < y_{n} = L \), with \( \Delta y_{i} = y_{i} - y_{i - 1} \).
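The Riemann-sum argument can be illustrated numerically. For an assumed increasing gain function \( G(r) = r^{2} \) (an arbitrary choice for the sketch), the left-endpoint sum \( \sum \Delta y_{i} /G(y_{i - 1} ) \) upper-bounds the integral \( \int_{\varepsilon }^{L} dr/G(r) \) and converges to it as the partition is refined:

```python
# Left-endpoint Riemann sum vs. the exact integral for the assumed
# increasing gain G(r) = r^2 on [eps, L]; 1/G is then decreasing, so the
# left-endpoint sum overestimates the integral and converges from above.
eps, L, n = 0.1, 1.0, 100000
h = (L - eps) / n
upper_sum = sum(h / (eps + i * h) ** 2 for i in range(n))
exact = 1 / eps - 1 / L  # integral of r^(-2) from eps to L
print(upper_sum >= exact, abs(upper_sum - exact) < 1e-3)
```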

Appendix C Proof of Theorem 2

According to the Riemann integral, the interval \( \left[ {\varepsilon ,L} \right] \) is partitioned into n subintervals: \( \varepsilon = y_{0} < y_{1} < \ldots < y_{n - 1} < y_{n} = L \). Then \( E\left( {T|_{\varepsilon }^{L} } \right) = lim_{n \to + \infty } \sum\nolimits_{i = 1}^{n} E \left( {T|_{{y_{i - 1} }}^{{y_{i} }} } \right) \), where for \( i = 1,2, \ldots ,n \),

$$ E\left( {T|_{{y_{i - 1} }}^{{y_{i} }} } \right) = \mathop \int \nolimits_{0}^{ + \infty } t \cdot g_{T} \left( t \right)dt = \mathop \int \nolimits_{0}^{ + \infty } g_{{D_{t} }} \left( r \right) \cdot \frac{{\Delta y_{i} }}{r}dr $$

\( g_{T} \left( t \right) \) denotes the probability density function of the time to cross \( \left[ {y_{i - 1} ,y_{i} } \right] \), and \( g_{{D_{t} }} \left( r \right) \) denotes the probability density function of the one-step gain, which satisfies \( t = \frac{{\Delta y_{i} }}{r} \).

By the Cauchy–Schwarz inequality, \( \mathop \int \nolimits_{0}^{ + \infty } g_{{D_{t} }} \left( r \right) \cdot \frac{1}{r}dr \cdot \mathop \int \nolimits_{0}^{ + \infty } g_{{D_{t} }} \left( r \right) \cdot rdr \ge \left( {\mathop \int \nolimits_{0}^{ + \infty } g_{{D_{t} }} \left( r \right)dr} \right)^{2} = 1 \), and therefore \( \mathop \int \nolimits_{0}^{ + \infty } g_{{D_{t} }} \left( r \right) \cdot \frac{{\Delta y_{i} }}{r}dr \ge \frac{{\Delta y_{i} }}{{\mathop \smallint \nolimits_{0}^{ + \infty } g_{{D_{t} }} \left( r \right) \cdot rdr}} = \frac{{\Delta y_{i} }}{{G\left( {x_{i} } \right)}} \).

Thus it can be proved that:

$$ E\left( {T|_{\varepsilon }^{L} } \right) = \mathop {lim}\limits_{n \to + \infty } \mathop \sum \limits_{i = 1}^{n} E\left( {T|_{{y_{i - 1} }}^{{y_{i} }} } \right) \ge \mathop {lim}\limits_{n \to + \infty } \mathop \sum \limits_{i = 1}^{n} \frac{{\Delta y_{i} }}{{G\left( {x_{i} } \right)}} = \mathop \int \nolimits_{\varepsilon }^{L} \frac{1}{{G\left( r \right)}}dr $$

Appendix D Proof of Lemma 2

$$ E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r} \right] = \mathop \int \nolimits_{{\delta_{1} }}^{{}} f_{{x_{t + 1} }} \left( {x |d\left( {x_{t + 1}^{\prime} } \right) = r} \right)\left( {r - d\left( x \right)} \right)dx $$

where \( \delta_{1} = \{ x\,|\,d\left( x \right) < r\} \); given \( d\left( {x_{t + 1}^{\prime} } \right) = r \), the probability density function of \( x_{t + 1} \) is \( f_{{x_{t + 1} }} \left( {x\,|\,d\left( {x_{t + 1}^{\prime} } \right) = r} \right) \).

Let \( \delta_{2} = \{ x\left| {\left| {\left( {x - x_{t + 1}^{\prime } } \right)_{i} } \right|} \right. \le 1,i = 1,2, \ldots ,n\} \). The disturbance u is uniformly distributed in (−1, 1) in each dimension, so \( f_{u} \left( x \right) = 0 \) whenever \( \left| {x_{i} } \right| > 1 \) for some \( i \) (\( f_{u} \left( x \right) \) denotes the probability density function of u). Thus, in the complement \( R^{n} - \delta_{2} \), \( f_{{x_{t + 1} }} \left( {x\,|\,d\left( {x_{t + 1}^{\prime} } \right) = r} \right) = 0 \), and it satisfies

$$ E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r} \right] = \mathop \int \nolimits_{\delta }^{{}} f_{{x_{t + 1} }} \left( {x |d\left( {x_{t + 1}^{\prime} } \right) = r} \right)\left( {r - d\left( x \right)} \right)dx $$

Here, \( \delta = \delta_{1} \; \cap \;\delta_{2} \). Because \( d\left( x \right) < r \le 0.5 \) on \( \delta_{1} \) and \( d\left( {x_{t + 1}^{\prime} } \right) = r \le 0.5 \), every \( x \in \delta_{1} \) satisfies \( \left| {\left( {x - x_{t + 1}^{\prime } } \right)_{i} } \right| \le 1 \); that means \( \delta_{1} \; \cap \;\delta_{2} = \delta_{1} \), so \( \delta = \delta_{1} . \)

\( \delta_{1} = \{ x|d\left( x \right) < r\} \) is a high-dimensional ball, and its integral satisfies

$$ \mathop \int \nolimits_{\delta }^{{}} f_{{x_{t + 1} }} \left( {x\,|\,d\left( {x_{t + 1}^{\prime} } \right) = r} \right)\left( {r - d\left( x \right)} \right)dx = \frac{1}{{2^{n} }}\mathop \int \nolimits_{\delta }^{{}} \left( {r - d\left( x \right)} \right)dx $$

According to Appendix E, when \( r \le 0.5 \), there is

$$ {\text{E}}\left[ {{\text{d}}\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) | {\text{d}}\left( {x_{t + 1}^{\prime} } \right) = r} \right] = 2^{ - n} \cdot r^{n + 1} \cdot \sqrt \pi^{n} /\left( {\Gamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)} \right) $$
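This closed form can be checked by Monte Carlo simulation. The setup below is an assumed concretization consistent with the spherical integration of Appendix E: d(x) is taken as the Euclidean distance to the optimum at the origin, and the current point x' is placed at distance r.

```python
import math
import random

def mc_gain_uniform(n=2, r=0.3, samples=400000, seed=0):
    """Monte Carlo estimate of E[d(x') - d(x_{t+1}) | d(x') = r] for a
    per-coordinate uniform(-1, 1) disturbance, with d(x) = |x| and x'
    placed at distance r from the optimum (assumed setup)."""
    rng = random.Random(seed)
    xp = [r] + [0.0] * (n - 1)  # d(x') = r
    total = 0.0
    for _ in range(samples):
        x = [xi + rng.uniform(-1.0, 1.0) for xi in xp]
        d = math.sqrt(sum(v * v for v in x))
        if d < r:  # elite selection only counts improvements
            total += r - d
    return total / samples

n, r = 2, 0.3
# Lemma 2 closed form: 2^-n * r^(n+1) * pi^(n/2) / (Gamma(n/2+1) * (n+1))
closed_form = (2 ** -n * r ** (n + 1) * math.pi ** (n / 2)
               / (math.gamma(n / 2 + 1) * (n + 1)))
estimate = mc_gain_uniform(n, r)
print(abs(estimate - closed_form) / closed_form < 0.05)
```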

Appendix E Calculate \( \mathop \int \nolimits_{\delta }^{{}} \left( {r - d\left( x \right)} \right)dx \)

According to the definition of the Beta function, \( B\left( {m + 1,n + 1} \right) = 2\mathop \int \nolimits_{0}^{\pi /2} \left( {cosx} \right)^{2m + 1} \left( {sinx} \right)^{2n + 1} dx \); thus \( \int_{0}^{{{\pi }/2}} {\left( {sinx} \right)^{k} } dx = \frac{1}{2} \cdot B\left( {\frac{1}{2},\frac{{k + 1}}{2}} \right) \), \( k \in N \).

Because \( B\left( {m,n} \right) = \varGamma \left( m \right) \cdot \varGamma \left( n \right)/\varGamma \left( {m + n} \right) \), \( \varGamma \left( {1/2} \right) = \sqrt \pi \), there is

$$ \mathop \int \nolimits_{0}^{\pi } \left( {sinx} \right)^{k} dx = 2\mathop \int \nolimits_{0}^{\pi /2} \left( {sinx} \right)^{k} dx = B\left( {\frac{1}{2}, \frac{k + 1}{2}} \right) = \sqrt \pi \cdot \frac{{\varGamma \left( {\frac{k}{2} + \frac{1}{2}} \right)}}{{\varGamma \left( {\frac{k}{2} + 1} \right)}} $$
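This identity is easy to verify numerically with a midpoint rule (a quick sketch; the grid size and tolerance are arbitrary choices):

```python
import math

def sin_power_integral(k, n=100000):
    """Midpoint-rule approximation of the integral of (sin x)^k over [0, pi]."""
    h = math.pi / n
    return sum(math.sin((i + 0.5) * h) ** k for i in range(n)) * h

# Check against sqrt(pi) * Gamma((k+1)/2) / Gamma(k/2 + 1) for small k.
for k in range(6):
    closed = math.sqrt(math.pi) * math.gamma((k + 1) / 2) / math.gamma(k / 2 + 1)
    assert abs(sin_power_integral(k) - closed) < 1e-6
print("identity verified for k = 0..5")
```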

For ease of calculation, \( l \) replaces \( r \) in \( \mathop \int \nolimits_{\delta }^{{}} \left( {r - d\left( x \right)} \right)dx \); evaluating in spherical coordinates,

$$ \begin{aligned} & \mathop \int \nolimits_{r = 0}^{l} \mathop \int \nolimits_{{\beta_{n} = 0}}^{\pi } \mathop \int \nolimits_{{\beta_{n - 1} = 0}}^{\pi }\,\ldots\,\mathop \int \nolimits_{{\beta_{3} = 0}}^{\pi } \mathop \int \nolimits_{\alpha = 0}^{2\pi } \left( {l - r} \right)dr(rsin\beta_{n}\,\ldots\,sin\beta_{3} d\alpha )\left( {rsin\beta_{n}\,\ldots\,sin\beta_{4} d\beta_{3} } \right)\,\ldots\,\left( {rd\beta_{n} } \right) \\ & \quad = l\mathop \int \nolimits_{r = 0}^{l} \mathop \int \nolimits_{{\beta_{n} = 0}}^{\pi }\,\ldots\,\mathop \int \nolimits_{\alpha = 0}^{2\pi } dr\left( {rsin\beta_{n}\,\ldots\,sin\beta_{3} d\alpha } \right)\,\ldots\,\left( {rd\beta_{n} } \right) - \mathop \int \nolimits_{r = 0}^{l} \mathop \int \nolimits_{{\beta_{n} = 0}}^{\pi }\,\ldots\,\mathop \int \nolimits_{\alpha = 0}^{2\pi } rdr(rsin\beta_{n}\,\ldots\,sin\beta_{3} d\alpha )\,\ldots\,\left( {rd\beta_{n} } \right) \\ & \quad = l\mathop \int \nolimits_{r = 0}^{l} dr \cdot r^{n - 1} \cdot 2\pi \cdot \mathop \prod \limits_{i = 1}^{n - 2} \mathop \int \nolimits_{\beta = 0}^{\pi } \left( {sin\beta } \right)^{i} d\beta - \mathop \int \nolimits_{r = 0}^{l} rdr \cdot r^{n - 1} \cdot 2\pi \cdot \mathop \prod \limits_{i = 1}^{n - 2} \mathop \int \nolimits_{\beta = 0}^{\pi } \left( {sin\beta } \right)^{i} d\beta \\ & \quad = l\mathop \int \nolimits_{r = 0}^{l} dr \cdot r^{n - 1} \cdot 2\pi \cdot \mathop \prod \limits_{i = 1}^{n - 2} B\left( {\frac{1}{2},\frac{i + 1}{2}} \right) - \mathop \int \nolimits_{r = 0}^{l} rdr \cdot r^{n - 1} \cdot 2\pi \cdot \mathop \prod \limits_{i = 1}^{n - 2} B\left( {\frac{1}{2},\frac{i + 1}{2}} \right) \\ & \quad = l\mathop \int \nolimits_{r = 0}^{l} dr \cdot r^{n - 1} \cdot 2\sqrt \pi^{n} /\varGamma \left( {n/2} \right) - \mathop \int \nolimits_{r = 0}^{l} rdr \cdot r^{n - 1} \cdot 2\sqrt \pi^{n} /\varGamma \left( {n/2} \right) \\ & \quad = l^{n + 1} \cdot \sqrt \pi^{n} /\varGamma \left( {n/2 + 1} \right) - 1/\left( {n + 1} \right) \cdot l^{n + 1} \cdot 2\sqrt \pi^{n} /\varGamma \left( {n/2} \right) \\ & \quad = l^{n + 1} \cdot \sqrt \pi^{n} /\varGamma \left( {n/2} \right) \cdot 2/n - 1/\left( {n + 1} \right) \cdot l^{n + 1} \cdot 2\sqrt \pi^{n} /\varGamma \left( {n/2} \right) \\ & \quad = l^{n + 1} \cdot \sqrt \pi^{n} /\left( {\varGamma \left( {n/2 + 1} \right) \cdot \left( {n + 1} \right)} \right) \\ \end{aligned} $$

Appendix F Proof of Lemma 3

$$ \begin{aligned} & E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r} \right] - E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r^{{\prime }} } \right] \\ & \qquad \qquad \qquad - \left( {r - r^{{\prime }} } \right) = \frac{{r^{n + 1} - r^{{{\prime }n + 1}} }}{{2^{n} \cdot \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)}} - \left( {r - r^{{\prime }} } \right) \\ & \qquad \qquad \qquad = \frac{{\left( {r - r^{{\prime }} } \right) \cdot \mathop \sum \nolimits_{i = 1}^{n + 1} r^{n + 1 - i} \cdot r^{{{\prime }i - 1}} }}{{2^{n} \cdot \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)}} - \left( {r - r^{{\prime }} } \right) \\ & \qquad \qquad \qquad = \left( {\frac{{\mathop \sum \nolimits_{i = 1}^{n + 1} r^{n + 1 - i} \cdot r^{{{\prime }i - 1}} }}{{2^{n} \cdot \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)}} - 1} \right) \cdot \left( {r - r^{{\prime }} } \right) \\ \end{aligned} $$

Obviously:

$$ \frac{{\sum\nolimits_{i = 1}^{n + 1} {r^{n + 1 - i} } \cdot r^{\prime i - 1} }}{{2^{n} \cdot \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)}} - 1\; < \;0 $$

Therefore:

$$ \begin{aligned} & E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r} \right] - E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r^{{\prime }} } \right] \\ & \qquad \qquad \qquad - \left( {r - r^{{\prime }} } \right) = \left( {\frac{{\mathop \sum \nolimits_{i = 1}^{n + 1} r^{n + 1 - i} \cdot r^{{{\prime }i - 1}} }}{{2^{n} \cdot \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)}} - 1} \right) \cdot \left( {r - r^{{\prime }} } \right) \le 0 \\ \end{aligned} $$
$$ E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r^{{\prime }} } \right] + r - r^{{\prime }} \ge E\left[ {d\left( {x_{t + 1}^{\prime} } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime} } \right) = r} \right] $$

Appendix G Proof of Theorem 3

$$ \begin{aligned} E\left( {T|_{\varepsilon }^{0.5} } \right) \le \mathop \int \nolimits_{\varepsilon }^{0.5} \frac{1}{{G_{l} \left( r \right)}}dr & = \mathop \int \nolimits_{\varepsilon }^{0.5} \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot r^{{ - \left( {n + 1} \right)}} \cdot \frac{{\sqrt \pi^{ - n} }}{{2^{ - n} }}dr \\ & = \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot \sqrt \pi^{ - n} \cdot 2^{n} \cdot \mathop \int \nolimits_{\varepsilon }^{0.5} r^{{ - \left( {n + 1} \right)}} dr \\ & = \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot \sqrt \pi^{ - n} \cdot 2^{n} \cdot \left( { - \frac{1}{n} \cdot r^{ - n} |_{{r = \varepsilon }}^{0.5} } \right) \\ & = \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \frac{{\left( {n + 1} \right)}}{n} \cdot \sqrt \pi^{ - n} \cdot 2^{n} \cdot \left( {\varepsilon^{ - n} - 0.5^{ - n} } \right) \\ & = \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right)/n \cdot 2^{n} /\sqrt \pi^{n} \cdot \left( {\frac{1}{{\varepsilon^{n} }} - \frac{1}{{0.5^{n} }}} \right) \\ \end{aligned} $$
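Evaluating the final expression shows the exponential growth in the dimension n stated by the theorem (the values of \( \varepsilon \) and n below are arbitrary choices for illustration):

```python
import math

def theorem3_bound(n, eps):
    """Theorem 3 upper bound on E(T) for the uniform disturbance:
    Gamma(n/2+1) * (n+1)/n * 2^n / pi^(n/2) * (eps^-n - 0.5^-n)."""
    return (math.gamma(n / 2 + 1) * (n + 1) / n
            * 2 ** n / math.pi ** (n / 2)
            * (1 / eps ** n - 1 / 0.5 ** n))

eps = 0.1
bounds = [theorem3_bound(n, eps) for n in (2, 4, 8, 16)]
# Each doubling of the dimension multiplies the bound by far more than 10x.
assert all(b2 > 10 * b1 for b1, b2 in zip(bounds, bounds[1:]))
print([f"{b:.3g}" for b in bounds])
```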

Appendix H Proof of Lemma 5

$$ E\left[ {d\left( {x_{t + 1}^{\prime } } \right) - d\left( {x_{t + 1} } \right) |d\left( {x_{t + 1}^{\prime } } \right) = r} \right] = \int_{\delta }^{{}} {f_{{x_{t + 1} }} } \left( {x|d\left( {x_{t + 1}^{\prime } } \right) = r} \right)\left( {r - d\left( x \right)} \right)dx\;\;\left( {{\text{H}}1} \right) $$

where \( \delta = \{ x|d\left( x \right) < r\} \); given \( d\left( {x_{t + 1}^{\prime} } \right) = r \), \( f_{{x_{t + 1} }} (x|d\left( {x_{t + 1}^{\prime} } \right) = r) \) represents the probability density function of \( x_{t + 1} \).

When \( d\left( {x_{t + 1} } \right) < d\left( {x_{t + 1}^{\prime} } \right) = r \), \( x_{t + 1} \) belongs to \( \delta \). Since \( x_{t + 1} = x_{t + 1}^{\prime} + u \), \( f_{{x_{t + 1} }} (x|d\left( {x_{t + 1}^{\prime} } \right) = r) \) equals \( f_{u} \left( {x - x_{t + 1}^{\prime} } \right) \), where \( u = x_{t + 1} - x_{t + 1}^{\prime} \) and \( f_{u} \left( x \right) \) is the probability density function of \( u \).

$$ f_{u} \left( x \right) = \mathop \prod \limits_{i = 1}^{n} f_{2} \left( {x_{i} } \right) = \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} \cdot e^{{ - \frac{{d\left( x \right)^{2} }}{2}}} $$

\( f_{u} \left( x \right) \) decreases as \( d\left( x \right) \) increases, and for \( x_{t + 1} \in \delta \) the disturbance satisfies \( 0 \le d\left( u \right) \le d\left( {x_{t + 1} } \right) + d\left( {x_{t + 1}^{\prime} } \right) \le 2d\left( {x_{t + 1}^{\prime} } \right) = 2r \); thus

$$ \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} \cdot e^{{ - 2r^{2} }} \le f_{u} \left( x \right) \le \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} $$
$$ \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} \cdot e^{{ - 2r^{2} }} \le f_{{x_{t + 1} }} (x|d\left( {x_{t + 1}^{\prime} } \right) = r) \le \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} $$

Substituting into (H1), there is

$$ \begin{aligned} & \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} \cdot e^{{ - 2r^{2} }} \cdot \int_{\delta }^{{}} {\left( {r - d\left( x \right)} \right)} dx \le E\left[ {d\left( {x_{t + 1}^{\prime } } \right) - d\left( {x_{t + 1} } \right)|d\left( {x_{t + 1}^{\prime } } \right) = r} \right] \\ & \qquad \qquad \quad \le \frac{1}{{\left( {\sqrt {2\pi } } \right)^{n} }} \cdot \int_{\delta }^{{}} {\left( {r - d\left( x \right)} \right)} dx \\ \end{aligned} $$

Combining this with Appendix E, it can be obtained that

$$ \begin{aligned} & \frac{{e^{{ - 2r^{2} }} \cdot r^{n + 1} }}{{\varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {\sqrt 2 } \right)^{n} \cdot \left( {n + 1} \right)}} \le E\left[ {d\left( {x_{t + 1}^{\prime } } \right) - d\left( {x_{t + 1} } \right)|d\left( {x_{t + 1}^{\prime } } \right) = r} \right] \\ & \qquad \qquad \qquad \le \frac{{r^{n + 1} }}{{\varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {\sqrt 2 } \right)^{n} \cdot \left( {n + 1} \right)}} \\ \end{aligned} $$
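Two-sided bounds of this shape can be checked by Monte Carlo simulation under the same assumed concretization as in the uniform case (d(x) as Euclidean distance, x' at distance r). The lower bound below uses the factor \( e^{-2r^{2}} \), the factor whose reciprocal \( e^{2r^{2}} \) appears in the proof of Theorem 4.

```python
import math
import random

def mc_gain_normal(n=2, r=0.3, samples=400000, seed=0):
    """Monte Carlo estimate of the expected one-step gain under a standard
    normal disturbance, with d(x) = |x| and x' at distance r (assumed setup)."""
    rng = random.Random(seed)
    xp = [r] + [0.0] * (n - 1)  # d(x') = r
    total = 0.0
    for _ in range(samples):
        x = [xi + rng.gauss(0.0, 1.0) for xi in xp]
        d = math.sqrt(sum(v * v for v in x))
        if d < r:  # only improvements survive elite selection
            total += r - d
    return total / samples

n, r = 2, 0.3
denom = math.gamma(n / 2 + 1) * math.sqrt(2) ** n * (n + 1)
lower = math.exp(-2 * r * r) * r ** (n + 1) / denom
upper = r ** (n + 1) / denom
est = mc_gain_normal(n, r)
print(lower <= est <= upper)
```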

Appendix I Proof of Theorem 4

$$ \begin{aligned} E\left( {T|_{\varepsilon }^{0.5} } \right) & \le \int_{\varepsilon }^{0.5} {\frac{1}{{G_{l} \left( r \right)}}\;} dr \le \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot \left( {\sqrt 2 } \right)^{n} \int_{\varepsilon }^{0.5} {r^{{ - \left( {n + 1} \right)}} } \cdot e^{{2r^{2} }} dr \\ & < \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot \left( {\sqrt 2 } \right)^{n} \cdot e^{2} \cdot \int_{\varepsilon }^{0.5} {r^{{ - \left( {n + 1} \right)}} } dr \\ & = \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot \left( {\sqrt 2 } \right)^{n} \cdot e^{2} \cdot \left( {\frac{ - 1}{{n \cdot \left( {0.5} \right)^{n} }} + \frac{1}{{n \cdot \;\varepsilon^{n} }}} \right) \\ & < \varGamma \left( {\frac{n}{2} + 1} \right) \cdot \left( {n + 1} \right) \cdot \left( {\sqrt 2 } \right)^{n} \cdot e^{2} \cdot \frac{1}{{n \cdot \;\varepsilon^{n} }} \\ \end{aligned} $$
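Comparing an upper bound of the form \( \varGamma (n/2 + 1)(n + 1)(\sqrt 2 )^{n} e^{2} /(n\varepsilon^{n} ) \) for the normal disturbance (obtained by dropping the negative \( 0.5^{ - n} \) term in the Theorem 4 derivation) with the Theorem 3 bound for the uniform disturbance illustrates the abstract's claim that, at the same accuracy and fitness difference, the uniform disturbance yields the better bound:

```python
import math

def uniform_bound(n, eps):
    # Theorem 3 (uniform disturbance)
    return (math.gamma(n / 2 + 1) * (n + 1) / n * 2 ** n / math.pi ** (n / 2)
            * (1 / eps ** n - 1 / 0.5 ** n))

def normal_bound(n, eps):
    # Theorem 4 (standard normal disturbance), relaxed by dropping
    # the negative 0.5^-n term
    return (math.gamma(n / 2 + 1) * (n + 1) * math.sqrt(2) ** n
            * math.e ** 2 / (n * eps ** n))

eps = 0.1
assert all(uniform_bound(n, eps) < normal_bound(n, eps) for n in range(2, 11))
print("uniform-disturbance bound is smaller for n = 2..10, eps = 0.1")
```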


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Hongyue, W., Han, H., Shuling, Y., Yushan, Z. (2017). Running-Time Analysis of Particle Swarm Optimization with a Single Particle Based on Average Gain. In: Shi, Y., et al. Simulated Evolution and Learning. SEAL 2017. Lecture Notes in Computer Science, vol 10593. Springer, Cham. https://doi.org/10.1007/978-3-319-68759-9_42


  • DOI: https://doi.org/10.1007/978-3-319-68759-9_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68758-2

  • Online ISBN: 978-3-319-68759-9

