Abstract
A novel Attractive and Repulsive Fully Informed Particle Swarm Optimization based on the modified Fitness Model (ARFIPSOMF) is presented. In ARFIPSOMF, a modified fitness model serves as a self-organizing mechanism for constructing the population structure. The population structure is generated gradually as the construction and optimization processes progress asynchronously. An attractive and repulsive interaction mechanism is also introduced: the cognitive and social effects on each particle are weighted by its 'contextual fitness' value \(F\). Two kinds of experiments are conducted. Results focusing on optimization performance show that the proposed algorithm maintains stronger population diversity during convergence, yielding good solution quality on a wide range of test functions while converging faster. Moreover, the results concerning the topological characteristics of the population structure indicate that (1) the final population structures developed by optimizing different test functions differ, which is important for improving ARFIPSOMF performance, and (2) the final structures developed on some test functions are approximately scale-free.
References
Barabasi AL, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512
Bianconi G, Barabasi AL (2001) Competition and multiscaling in evolving networks. Europhys Lett 54(4):436–442
Eguiluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV (2005) Scale-free brain functional networks. Phys Rev Lett 94(1)
El-Abd M, Kamel M (2005) Information exchange in multiple cooperating swarms. In: Proceedings of the 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, pp 138–142
Fierro R, Castillo O, Valdez F, Cervantes L (2013) Design of optimal membership functions for fuzzy controllers of the water tank and inverted pendulum with PSO variants. IFSA/NAFIPS, pp 1068–1073
Giacobini M, Preuss M, Tomassini M (2006) Effects of scale-free and small-world topologies on binary coded self-adaptive CEA. In: Proceedings of evolutionary computation combinatorial optimization, pp 86–98
Janson S, Middendorf M (2005) A hierarchical particle swarm optimizer and its adaptive variant. IEEE Trans Syst Man Cybern B 35(6):1272–1282
Jeong H, Mason SP, Barabási AL, Oltvai ZN (2001) Lethality and centrality in protein networks. Nature 411:41–42
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks. IEEE Service Center, Piscateway, pp 1942–1948
Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of congress evolutionary computation (CEC 2002), Hawaii, pp 1671–1676
Kirley M, Stewart R (2007a) An analysis of the effects of population structure on scalable multiobjective optimization problems. In: Proceedings of genetic evolutionary computation conference (GECCO07), pp 845–852
Kirley M, Stewart R (2007b) Multiobjective optimization on complex networks. In: Proceedings of 4th international conference on evolutionary multicriterion optimization (LNCS), pp 81–95
Maldonado Y, Castillo O, Melin P (2013) Particle swarm optimization of interval type-2 fuzzy systems for FPGA applications. Appl Soft Comput 13(1):496–508
Melin P, Olivas F, Castillo O, Valdez F, Soria J, García Valdez JM (2013) Optimal design of fuzzy classification systems using PSO with dynamic parameter adaptation through fuzzy logic. Expert Syst Appl 40(8):3196–3206
Mendes R (2004) Population topologies and their influence in particle swarm performance. Dissertation, University of Minho, Braga
Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210
Mo S, Zeng J (2012) Particle Swarm Optimization based on self-organization topology driven by fitness with different removing link strategies. Int J Innov Comput Appl 4(2):119–132
Niu B, Zhu YL et al (2006) An improved particle swarm optimization based on bacterial chemotaxis. In: Proceedings of the 6th world congress on intelligent control and automation, Dalian, pp 3193–3197
Riget J, Vesterstrøm JS (2002) A diversity-guided particle swarm optimizer: the ARPSO. Tech. rep., Department of Computer Science, University of Aarhus, Denmark
Silva A, Neves A, Costa E (2002) An empirical comparison of particle swarm and predator prey optimisation. Lecture Notes in Artificial Intelligence, pp 103–110
Solis F, Wets R (1981) Minimization by random search techniques. Math Oper Res 6(1):19–30
Spears DF, Kerr W et al (2004) An overview of physicomimetics. Lecture Notes in Computer Science-State of the Art Series, pp 84–97
Valdez F, Melin P, Castillo O (2014) A survey on nature-inspired optimization algorithms with fuzzy logic for dynamic parameter adaptation. Expert Syst Appl 41(14):6459–6466
Wang YX, Xiang QL, Mao JY (2008) Particle swarms with dynamic ring topology. In: IEEE congress on evolutionary computation, pp 419–423
Whitacre JM, Sarker RA, Pham QT (2008) The self-organization of interaction networks for nature-inspired optimization. IEEE Trans Evol Comput 12(2):220–230
Zhang C, Yi Z (2011) Scale-free fully informed particle swarm optimization algorithm. Inf Sci 181(20):4550–4568
Additional information
Communicated by V. Loia.
This work is supported by Youth Foundation of Shanxi Province under Grant No. 2012021012-5 and Research Foundation for the Doctoral Program of Taiyuan University of Science and Technology under Grant No. 20122055.
Appendix A
A.1 Convergence analysis of ARFIPSO
To analyze the convergence of ARFIPSO, the stability theory of linear systems is used in this section. For the purpose of analysis, consider the situation in which \(P_g\) remains constant over a period of time. Since particle \(i\) is chosen at random, the results apply to all other particles. In addition, since each dimension is updated independently of the others, the one-dimensional case is considered without loss of generality. Eqs. (8) and (9) can be transformed as:
Then, Eq. (19) becomes:
Substituting Eqs. (20) and (21) into Eq. (22), the iterative process is obtained as follows:
Equation (22) can be viewed as a second-order discrete system with \((1-F_i (t))c_i \bigl (\sum \nolimits _{j\in B(i)} r_j -\sum \nolimits _{j\in W(i)} r_j \bigr )p_j +F_i (t)\beta r_g p_g +F_i (t)\beta r_i p_i \) as the input.
To analyze the convergent condition of sequence \(\{Ex_i (t)\}\) \((Ex_i (t)\) is the expectation of random variable \(x_i (t))\), Eq. (23) is obtained from Eq. (22):
Let \(\phi =\frac{1}{2}(1-F_i (t))c_i (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )+\frac{1}{2}F_i (t)\beta +\frac{1}{2}F_i (t)\beta \).
The characteristic equation of the iterative process shown in Eq. (23) is
According to the theory of stability of linear system, the convergent condition of iterative process \(\{Ex_i (t)\}\) is that absolute values of eigenvalues \(\lambda _{1}\) and \(\lambda _{2}\) are both less than 1. That is,
Thus, the convergent condition of iterative process \(\{Ex_i (t)\}\) is written as:
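The characteristic equation itself did not survive extraction. Assuming the standard second-order form of the PSO expectation recurrence, \(Ex_i (t+1)=(1+w-\phi )Ex_i (t)-wEx_i (t-1)+\mathrm {const}\) (a reconstruction consistent with the case analysis that follows, not a verbatim copy of the paper's Eqs. (25)–(26)), the characteristic equation and the resulting condition read:

```latex
\lambda^{2} - (1 + w - \phi)\,\lambda + w = 0,
\qquad
|\lambda_{1}| < 1 \ \text{and}\ |\lambda_{2}| < 1
\;\Longleftrightarrow\;
0 < \phi < 2(1+w), \quad 0 \le w < 1 .
```

With \(\phi\) as defined above, the right-hand inequality is exactly the condition analyzed case by case below.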
Next, Eq. (26) is analyzed in detail according to the value of \(F_i \):

1. If \(F_i =1\), then \(0<F_i \beta <2(1+w)\). So, particle \(i\) is convergent when \(0<\beta <2(1+w)\).

2. If \(F_i =0\), then \(0<c_i (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )<4(1+w)\). Further analysis is given as follows:

   1) If particle \(i\) satisfies \(\left| {n(B(i))} \right| -\left| {n(W(i))} \right| \le 0\), then it is divergent.

   2) If particle \(i\) satisfies \(\left| {n(B(i))} \right| -\left| {n(W(i))} \right| >0\), then \(0<c_i <\frac{4(1+w)}{\left| {n(B(i))} \right| -\left| {n(W(i))} \right| }\). Moreover, \(0\le w<1\), \(\min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=1\), \(\min 4(1+w)=4\) and \(\max 4(1+w)=8\). So:

      [1] When \(c_i =k_i \ge 8\ge \frac{\max 4(1+w)}{\min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )}\), particle \(i\) is divergent;

      [2] When \(c_i =k_i \le 2\), then \(\max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=2\) and \(c_i =k_i \le 2\le \frac{\min 4(1+w)}{\max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )}\). Thus, particle \(i\) is convergent;

      [3] When \(2<c_i =k_i <8\), particle \(i\) may be convergent or divergent.

3. If \(0<F_i <1\), then \(0<(1-F_i )c_i (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )+2F_i \beta <4(1+w)\). Further analysis is given as follows:

   1) When \(\max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=2\) and \(c_i =\beta \), because \(\min 4(1+w)=4\), then \(0<c_i \max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )<4\). Thus, particle \(i\) is convergent when \(0<c_i =k_i \le 2\).

   2) When \(\min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=1\) and \(c_i =\beta \), because \(\max 4(1+w)=8\), then \(0<c_i \min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )<8\). Thus, particle \(i\) is divergent when \(c_i =k_i \ge 8\).

   3) When \(2<c_i =k_i <8\), particle \(i\) may be convergent or divergent.
To summarize, when the acceleration coefficient \(c\) is set to a smaller value, nodes with fewer connections are more likely to converge; when \(c\) is set to a larger value, nodes with more connections are more likely to diverge. Thus, the convergence analysis of ARFIPSO can be used to guide the parameter settings in the experiments that follow.
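The case analysis above reduces to checking whether both roots of the characteristic equation lie inside the unit circle. A minimal numerical sketch of that check, assuming the standard second-order PSO expectation recurrence \(Ex_i (t+1)=(1+w-\phi )Ex_i (t)-wEx_i (t-1)+\mathrm {const}\); the function names and grid values below are illustrative, not from the paper:

```python
# Numerically verify that the recurrence x(t+1) = (1+w-phi) x(t) - w x(t-1)
# is stable exactly when 0 < phi < 2(1+w), for 0 <= w < 1.
import numpy as np

def spectral_radius(w, phi):
    """Largest eigenvalue magnitude of the companion matrix of
    lambda^2 - (1 + w - phi) lambda + w = 0."""
    companion = np.array([[1.0 + w - phi, -w],
                          [1.0, 0.0]])
    return max(abs(np.linalg.eigvals(companion)))

def is_stable(w, phi):
    return spectral_radius(w, phi) < 1.0

# Sweep a grid of (w, phi) pairs and compare against the closed-form region.
for w in np.linspace(0.0, 0.9, 10):
    for phi in np.linspace(0.1, 4.3, 43):
        if abs(phi - 2.0 * (1.0 + w)) < 1e-6:
            continue  # skip points sitting on the stability boundary
        assert is_stable(w, phi) == (0.0 < phi < 2.0 * (1.0 + w)), (w, phi)
print("stability region matches 0 < phi < 2(1+w)")
```

For instance, with \(w=0\) the region degenerates to \(0<\phi <2\), matching the classical stagnation analysis of the basic PSO.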
A.2 Global convergence analysis of ARFIPSOMF
Solis and Wets (1981) give the conditions under which a stochastic optimization algorithm converges to the global optimum with probability 1. The major conclusions are summarized as follows:
Hypothesis 1
\(f(D(z,\xi ))\le f(z)\); and if \(\xi \in \Omega \), then \(f(D(z,\xi ))\le f(\xi )\).
Here, \(D\) is the function that generates problem solutions, \(\xi \) is a random vector from the probability space \((R^n,B,u_k )\), \(f\) is the objective function, \(\Omega \subseteq R^n\) is the constrained solution space of the problem, \(u_k \) is a probability measure on \(B\), and \(B\) is the \(\sigma \)-field of subsets of \(R^n\).
Hypothesis 2
For every Borel subset \(A\) of \(\Omega \) with \(v(A)>0\),
where \(v(A)\) is the \(n\)-dimensional measure of \(A\) and \(u_k (A)\) is the probability that \(u_k \) generates a point in \(A\).
Theorem 1
If \(f\) is a measurable function, \(\Omega \) is a measurable subset of \(R^n\), and \(\left\{ {z_k } \right\} _0^\infty \) is the solution sequence generated by the stochastic optimization algorithm, then formula (28) holds when Hypotheses 1 and 2 are satisfied:
\(R_\varepsilon \) is the global optimum set.
Function \(D\) is defined as:
It can be proved that formula (29) satisfies Hypothesis 1. Furthermore, when the ARFIPSOMF search stagnates, for the new particle \(i\), \(M_{i,t} =\Omega \), and for every other particle \(l\),
Thus, \(\Omega \subseteq \bigcup \nolimits _{l=1,l\ne i}^{N(t)} M_{l,t} \cup M_{i,t} \).
Set \(A=M_{i,t} \), where \(A\) is a Borel subset of \(\Omega \). Thus \(\nu [A]>0\) and \(\mu _t [A]=\sum \nolimits _{i=0}^{N(t)} \mu _{i,t} [A]=1\), so Hypothesis 2 is satisfied. Since Hypotheses 1 and 2 hold, by Theorem 1 ARFIPSOMF converges to the global optimum with probability 1.
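Hypothesis 1 can be seen concretely: a greedy update map that keeps the better of the current best point and a fresh random sample satisfies it by construction. A minimal sketch (the objective \(f\), the map \(D\), and the search interval are illustrative assumptions, not the paper's formula (29)):

```python
# Hedged sketch of the solution-update map D in the Solis-Wets framework:
# D keeps the better of the current best z and the random sample xi, so
# f(D(z, xi)) <= f(z) and, for xi in Omega, f(D(z, xi)) <= f(xi)
# (Hypothesis 1). The 1-D sphere objective is purely illustrative.
import random

def f(x):
    # illustrative objective: 1-D sphere function, global optimum at x = 0
    return x * x

def D(z, xi):
    # greedy acceptance: never move to a worse point
    return xi if f(xi) <= f(z) else z

random.seed(0)
z = 5.0
for _ in range(1000):
    xi = random.uniform(-10.0, 10.0)  # sample from the feasible region Omega
    z_new = D(z, xi)
    # Hypothesis 1 holds at every step
    assert f(z_new) <= f(z) and f(z_new) <= f(xi)
    z = z_new
print("best found:", z)
```

The sequence \(f(z_k)\) is non-increasing by construction; Hypothesis 2 then ensures every positive-measure subset of \(\Omega \) is sampled infinitely often, which together give convergence with probability 1.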
Mo, S., Zeng, J. & Xu, W. Attractive and Repulsive Fully Informed Particle Swarm Optimization based on the modified Fitness Model. Soft Comput 20, 863–884 (2016). https://doi.org/10.1007/s00500-014-1546-8