Abstract
Running-time analysis of particle swarm optimization (PSO) is a hard problem in the field of swarm intelligence, especially for PSO variants whose solutions and velocities are encoded continuously. In this study, the running time of particle swarm optimization with a single particle (PSO-SP) is analyzed. An elite selection strategy and a stochastic disturbance are incorporated into PSO-SP to improve its optimization capacity and to adjust the direction of the single particle's velocity. Running-time analysis of PSO-SP based on the average gain model is carried out for two situations: a uniformly distributed disturbance and a standard normally distributed disturbance. The theoretical results show that the running time of PSO-SP is exponential under both disturbance distributions. Moreover, for the same accuracy and the same fitness difference value, the running time of PSO-SP with the uniformly distributed disturbance is better than that with the standard normally distributed disturbance.
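The PSO-SP update rule itself is not reproduced in this excerpt. As a rough illustration of the ingredients named in the abstract (a single particle, an elite that is replaced only by a better offspring, and an additive stochastic disturbance on the velocity drawn either uniformly or from a standard normal distribution), a minimal Python sketch might look as follows; all names, coefficients, and the exact update form are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def pso_sp(f, x0, steps=1000, w=0.7, c=1.5, disturbance="uniform", rng=None):
    """Single-particle PSO with elite selection and a stochastic disturbance
    on the velocity (illustrative sketch only, not the paper's exact rule)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    elite, elite_val = x.copy(), f(x)
    for _ in range(steps):
        # Stochastic disturbance: uniform on (-1, 1)^n or standard normal.
        if disturbance == "uniform":
            u = rng.uniform(-1.0, 1.0, size=x.shape)
        else:
            u = rng.standard_normal(size=x.shape)
        # Velocity pulled towards the elite, plus the disturbance.
        r = rng.uniform(size=x.shape)
        v = w * v + c * r * (elite - x) + u
        x = x + v
        # Elite selection: the elite is replaced only by a better offspring.
        fx = f(x)
        if fx < elite_val:
            elite, elite_val = x.copy(), fx
    return elite, elite_val

# Example on the sphere function in 10 dimensions.
best, best_val = pso_sp(lambda z: float(np.dot(z, z)), x0=np.ones(10))
```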
Acknowledgement
This work is supported by the National Natural Science Foundation of China (61370102), the Guangdong Natural Science Funds for Distinguished Young Scholars (2014A030306050), the Ministry of Education - China Mobile Research Funds (MCM20160206), and the Guangdong High-level Personnel of Special Support Program (2014TQ01X664).
Appendices
Appendix A Proof of Lemma 1
\( T|_{0}^{d(x_{0})} \) is written simply as \( T \). Let \( P(T = t) \) denote the probability that the calculation time equals \( t \), and let the one-step gain (difference of fitness differences) be \( D_{t} = d(x_{t}) - d(x_{t+1}) \). Then \( d(x_{0}) \) satisfies \( d(x_{0}) = E\left( \sum_{t=0}^{T-1} D_{t} \right) \ge V \cdot E(T) \).
Thus, when \( V \ne 0 \), \( E(T) \le \frac{d(x_{0})}{V} \).
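Lemma 1 is an average-gain (additive-drift) style bound \( E(T) \le d(x_{0})/V \). A toy numerical check, under assumptions chosen only for illustration (a process that gains one unit of fitness difference with probability \( p \) per step, so \( V = p \)), might look as follows.

```python
import numpy as np

def hitting_time_demo(d0=10, p=0.25, trials=5000, seed=0):
    """Toy check of the Lemma 1 style bound E(T) <= d(x0)/V.
    The fitness difference drops by 1 with probability p per step and is
    unchanged otherwise, so the expected one-step gain is V = p and the
    bound predicts E(T) <= d0 / p (here it holds with equality)."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(trials):
        d, t = d0, 0
        while d > 0:
            d -= int(rng.random() < p)   # gain 1 with probability p, else 0
            t += 1
        times.append(t)
    return float(np.mean(times)), d0 / p

print(hitting_time_demo())   # empirical mean close to the bound 40.0
```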
Appendix B Proof of Theorem 1
Given \( l \) (\( \varepsilon < l < L \)), consider the subinterval \( Z' = \{x \mid l < d(x) \le l + \Delta l\} \) and define the fitness difference function \( d'(x) = d(x) - l \). Starting from the right endpoint of the subinterval, \( d(x_{0}) = l + \Delta l \), the left endpoint is reached after time \( T|_{l}^{l+\Delta l} \). With \( V' := \min\{E(d'(x_{t}) - d'(x_{t+1})) \mid t \in N\} \), Lemma 1 bounds the average calculation time by \( E(T|_{l}^{l+\Delta l}) \le \frac{\Delta l}{V'} \).
Since \( G(r) \) is an increasing function, \( \min\{G(r) \mid l \le r \le l + \Delta l\} = G(l) \); thus \( E(T|_{l}^{l+\Delta l}) \le \frac{\Delta l}{G(l)} \).
The interval \( [\varepsilon, L] \) is partitioned into \( n \) subintervals: \( \varepsilon = y_{0} < y_{1} < \ldots < y_{n-1} < y_{n} = L \). Summing the subinterval bounds and letting \( n \to +\infty \), the Riemann integral gives the average calculation time of PSO-SP: \( E(T|_{\varepsilon}^{L}) \le \int_{\varepsilon}^{L} \frac{1}{G(l)}\, dl \).
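The limiting argument can be made concrete numerically: for an assumed gain function (here \( G(l) = 0.1\,l \), purely illustrative and not taken from the paper), the sum of per-subinterval bounds \( \Delta y_{i} / G(y_{i-1}) \) over a refined partition approaches the integral \( \int_{\varepsilon}^{L} dl / G(l) \).

```python
import numpy as np

def riemann_bound(G, eps=1e-3, L=1.0, n=10**5):
    """Sum of per-subinterval bounds Delta_y / G(y_{i-1}) over the uniform
    partition eps = y_0 < ... < y_n = L; for large n this approximates the
    integral upper bound on the expected running time."""
    y = np.linspace(eps, L, n + 1)
    dy = np.diff(y)
    return float(np.sum(dy / G(y[:-1])))

G = lambda l: 0.1 * l              # illustrative gain function, not from the paper
print(riemann_bound(G))            # ~ 10 * ln(L / eps), roughly 69.1
```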
Appendix C Proof of Theorem 2
According to the Riemann integral, the interval is partitioned into \( n \) subintervals: \( \varepsilon = y_{0} < y_{1} < \ldots < y_{n-1} < y_{n} = L \), so that \( E(T|_{\varepsilon}^{L}) = \lim_{n \to +\infty} \sum_{i=1}^{n} E(T|_{y_{i-1}}^{y_{i}}) \) holds, where \( i = 1, 2, \ldots, n \).
Let \( g_{T}(t) \) denote the probability density function of the time needed to cross \( [y_{i-1}, y_{i}] \), and let \( g_{D_{t}}(r) \) denote the probability density function of the one-step gain, so that the crossing time satisfies \( t = \frac{\Delta y_{i}}{r} \).
By the Cauchy-Schwarz inequality, \( \int_{0}^{+\infty} g_{D_{t}}(r) \cdot \frac{1}{r}\, dr \cdot \int_{0}^{+\infty} g_{D_{t}}(r) \cdot r\, dr \ge \left( \int_{0}^{+\infty} g_{D_{t}}(r)\, dr \right)^{2} = 1 \), and therefore \( \int_{0}^{+\infty} g_{D_{t}}(r) \cdot \frac{\Delta y_{i}}{r}\, dr \ge \frac{\Delta y_{i}}{\int_{0}^{+\infty} g_{D_{t}}(r) \cdot r\, dr} = \frac{\Delta y_{i}}{G(x_{i})} \).
Thus the claimed bound of Theorem 2 follows.
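The Cauchy-Schwarz step above amounts to \( E(1/D_{t}) \ge 1/E(D_{t}) \) for a positive one-step gain \( D_{t} \). A quick numerical check with an arbitrary positive density (a uniform density on \( (0.1, 1) \), chosen only for illustration) confirms the direction of the inequality.

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.uniform(0.1, 1.0, size=10**6)   # stand-in samples of a positive gain D_t
lhs = float(np.mean(1.0 / d))           # approximates E(1/D_t)  (~2.56 here)
rhs = 1.0 / float(np.mean(d))           # 1 / E(D_t)             (~1.82 here)
print(lhs >= rhs, lhs, rhs)             # the Cauchy-Schwarz step gives lhs >= rhs
```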
Appendix D Proof of Lemma 2
Let \( \delta_{1} = \{x \mid d(x) < r\} \). Given \( d(x'_{t+1}) = r \), the probability density function of \( x_{t+1} \) is \( f_{x_{t+1}}(x \mid d(x'_{t+1}) = r) \).
Let \( \delta_{2} = \{x \mid |(x - x'_{t+1})_{i}| \le 1,\ i = 1, 2, \ldots, n\} \). The disturbance \( u \) is drawn component-wise from the uniform distribution on \( (-1, 1) \), so its probability density function \( f_{u}(x) \) equals 0 whenever \( |x_{i}| > 1 \) for some \( i \in \{1, 2, \ldots, n\} \). Thus, on \( R^{n} \setminus \delta_{2} = \{x \mid |(x - x'_{t+1})_{i}| > 1 \text{ for some } i\} \), \( f_{x_{t+1}}(x \mid d(x'_{t+1}) = r) = 0 \), and it satisfies
Here, \( \delta = \delta_{1} \cap \delta_{2} \). Because \( |x| \ne 0 \) and \( r \le 0.5 \), we have \( |d(x)| < |x| \), which means \( \delta_{1} \cap \delta_{2} = \delta_{1} \), i.e. \( \delta = \delta_{1} \).
The set \( \delta_{1} = \{x \mid d(x) < r\} \) is a high-dimensional ball, and its integral satisfies
According to Appendix E, when \( r \le 0.5 \), we obtain
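As a numerical illustration of the quantity Lemma 2 works with, and under the assumption (ours, not stated in this excerpt) that \( d(x) \) is the Euclidean distance to the optimum, the probability that the uniformly disturbed point falls back inside \( \delta_{1} \) can be estimated by Monte Carlo and compared with the ball-volume ratio implied by the containment \( \delta = \delta_{1} \) for \( r \le 0.5 \).

```python
import math
import numpy as np

def p_improve_uniform(n=3, r=0.4, samples=10**6, seed=0):
    """Monte Carlo estimate of P(d(x' + u) < r) for d(x') = r and u uniform
    on (-1, 1)^n, assuming d is the Euclidean distance to the optimum,
    compared with vol(ball of radius r) / 2^n (valid for r <= 0.5)."""
    rng = np.random.default_rng(seed)
    x_prime = np.zeros(n)
    x_prime[0] = r                                   # any point at distance r
    u = rng.uniform(-1.0, 1.0, size=(samples, n))
    hit = np.linalg.norm(x_prime + u, axis=1) < r
    ball_vol = math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)
    return float(hit.mean()), ball_vol / 2 ** n

print(p_improve_uniform())   # the two values should agree closely
```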
Appendix E Calculation of \( \int_{\delta} (r - d(x))\, dx \)
According to the definition of the Beta function, \( B(m+1, n+1) = 2\int_{0}^{\pi/2} (\cos x)^{2m+1} (\sin x)^{2n+1}\, dx \); thus \( \int_{0}^{\pi/2} (\sin x)^{k}\, dx = \frac{1}{2} B\left( \frac{1}{2}, \frac{k+1}{2} \right) \), \( k \in N \).
Because \( B(m, n) = \Gamma(m) \Gamma(n) / \Gamma(m+n) \) and \( \Gamma(1/2) = \sqrt{\pi} \), we have
To simplify the calculation of \( \int_{\delta} (r - d(x))\, dx \), \( l \) replaces \( r \):
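The two identities used in this appendix can be verified numerically; the sketch below checks \( \int_{0}^{\pi/2} (\sin x)^{k}\, dx = \frac{1}{2} B(\frac{1}{2}, \frac{k+1}{2}) \) with \( B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a+b) \) for a sample value of \( k \).

```python
import math

def wallis_check(k=5, m=200000):
    """Check int_0^{pi/2} sin(x)^k dx = (1/2) * B(1/2, (k+1)/2),
    where B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)."""
    h = (math.pi / 2) / m
    lhs = sum(math.sin((i + 0.5) * h) ** k for i in range(m)) * h   # midpoint rule
    rhs = 0.5 * math.gamma(0.5) * math.gamma((k + 1) / 2) / math.gamma((k + 2) / 2)
    return lhs, rhs

print(wallis_check(5))   # both values ~ 0.53333
```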
Appendix F Proof of Lemma 3
Obviously:
Therefore:
Appendix G Proof of Theorem 3
Appendix H Proof of Lemma 5
Let \( \delta = \{x \mid d(x) < r\} \). Given \( d(x'_{t+1}) = r \), let \( f_{x_{t+1}}(x \mid d(x'_{t+1}) = r) \) denote the probability density function of \( x_{t+1} \).
When \( d(x_{t+1}) < d(x'_{t+1}) = r \), \( x_{t+1} \) belongs to \( \delta \). Since \( x_{t+1} = x'_{t+1} + u \), i.e. \( u = x_{t+1} - x'_{t+1} \), the density \( f_{x_{t+1}}(x \mid d(x'_{t+1}) = r) \) equals \( f_{u}(x) \), where \( f_{u}(x) \) is the probability density function of \( u \).
\( f_{u}(x) \) decreases as \( d(u) \) increases, and \( 0 \le d(u) \le 2 d(x'_{t+1}) = 2r \); thus
Substituting this into (H1), we obtain
Combining this with Appendix E, we obtain
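Under the same illustrative reading of \( d(x) \) as the Euclidean distance to the optimum, the corresponding one-step improvement probability for the standard normal disturbance can be estimated in the same way and set against the uniform case; the settings below are arbitrary and only meant to illustrate the kind of gap the theorems formalize.

```python
import numpy as np

def p_improve(n=3, r=0.4, samples=10**6, seed=0):
    """Monte Carlo estimates of P(d(x' + u) < r) for d(x') = r with u uniform
    on (-1, 1)^n versus u standard normal (illustrative assumptions only)."""
    rng = np.random.default_rng(seed)
    x_prime = np.zeros(n)
    x_prime[0] = r
    u_uni = rng.uniform(-1.0, 1.0, size=(samples, n))
    u_nor = rng.standard_normal(size=(samples, n))
    p_uni = float((np.linalg.norm(x_prime + u_uni, axis=1) < r).mean())
    p_nor = float((np.linalg.norm(x_prime + u_nor, axis=1) < r).mean())
    return p_uni, p_nor

print(p_improve())   # for these settings the uniform disturbance improves more often
```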
Appendix I Proof of Theorem 4
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Hongyue, W., Han, H., Shuling, Y., Yushan, Z. (2017). Running-Time Analysis of Particle Swarm Optimization with a Single Particle Based on Average Gain. In: Shi, Y., et al. Simulated Evolution and Learning. SEAL 2017. Lecture Notes in Computer Science, vol 10593. Springer, Cham. https://doi.org/10.1007/978-3-319-68759-9_42
DOI: https://doi.org/10.1007/978-3-319-68759-9_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-68758-2
Online ISBN: 978-3-319-68759-9