
Group mosquito host-seeking algorithm

Published in Applied Intelligence.

Abstract

The host-seeking behavior of mosquitoes is a striking example of collective behavior in nature. This paper is motivated by three general observations on mosquito groups and their host-seeking behavior: (1) mosquito behavior exhibits parallelism, openness, local interactivity and self-organization; (2) mosquito groups find hosts very quickly; (3) host-seeking behavior resembles the producer-scrounger process, in which group members search either for "finding" (producer) or for "joining" (scrounger) opportunities. These observations lead us to extend a natural mosquito system model into the group mosquito host-seeking model (GMHSM) and algorithm (GMHSA) for intelligent computing. In this paper, we propose the GMHS approach and show how to use it. Through the GMHSM, the traveling salesman problem (TSP) is transformed into the kinematics and dynamics of the group host-seeking process. The properties of the GMHSM and GMHSA, including correctness, convergence and stability, are discussed. The GMHS approach has advantages in multi-objective optimization, large-scale distributed parallel optimization, effectiveness of problem solving, and suitability for complex environments. Via simulations, we test the GMHS approach and compare it with other state-of-the-art algorithms.


Figures 1–9 (images not reproduced here).



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant No. 60905043, 61073107 and 61173048, the Innovation Program of Shanghai Municipal Education Commission, and the Fundamental Research Funds for the Central Universities.

Author information

Correspondence to Xiang Feng.

Appendix

Proof of Theorem 1.

Firstly, denote the j-th terms in the equations for dc_ij and dr_ij by \(\left \langle {\frac {dc_{ij} (t)}{dt}} \right \rangle _{j} \) and \(\left \langle {\frac {dr_{ij} (t)}{dt}} \right \rangle _{j} \), respectively. Then each term in the update formula is calculated as follows.

$$\left\langle {du_{ij} (t)/dt} \right\rangle_{1}^{r} =\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\left\langle {\frac{dr_{ij} (t)}{dt}} \right\rangle_{1} =-\lambda_{1} \left[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\right]^{2} $$
$$\begin{array}{@{}rcl@{}}\left\langle {du_{ij} (t)/dt} \right\rangle_{2}^{r} &&=\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\left\langle {\frac{dr_{ij} (t)}{dt}} \right\rangle_{2} =-\lambda_{2} \frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\frac{\partial J(t)}{\partial r_{ij} (t)}\\[-2pt] && =-\lambda_{2} \frac{\partial J(t)}{\partial u_{ij} (t)}\left[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\right]^{2} \end{array} $$

Similarly, we can get

$$\begin{array}{@{}rcl@{}} \left\langle {du_{ij} (t)/dt} \right\rangle_{3}^{r} &&=-\lambda_{3} \frac{\partial P(t)}{\partial u_{ij} (t)}[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}\\[-2pt] \left\langle {du_{ij} (t)/dt} \right\rangle_{4}^{r} &&=-\lambda_{4} \frac{\partial Q(t)}{\partial u_{ij} (t)}[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}\\[-2pt] \left \langle du_{ij} (t)/dt \right \rangle_{5}^{r} &&\!=\!\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\left\langle \frac{dr_{ij} (t)}{dt} \right\rangle_{5} \!=\!\frac{\beta_{r} dr_{ij} (t-1)}{\partial r_{ij} (t)}\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)} \end{array} $$

We can also get each term of \(\left \langle {\frac {dc_{ij} (t)}{dt}} \right \rangle _{j} \). We thus obtain

$$\begin{array}{@{}rcl@{}} &&\sum\limits_{j=1}^{5} \left[ {\left\langle {du_{ij} (t)/dt} \right\rangle _{j}^{c} +\left\langle {du_{ij} (t)/dt} \right\rangle_{j}^{r}} \right] \\ &&{\kern15pt}=[-\lambda_{1} -\lambda_{2} \frac{\partial J(t)}{\partial u_{ij} (t)}-\lambda_{3} \frac{\partial P(t)}{\partial u_{ij} (t)}-\lambda_{4} \frac{\partial Q(t)}{\partial u_{ij} (t)}] \cdot \{[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}+[\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}]^{2}\}+\frac{\beta_{r} dr_{ij} (t-1)}{\partial r_{ij} (t)}\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}\\ &&{\kern30pt}+\frac{\beta_{c} dc_{ij} (t-1)}{\partial c_{ij} (t)}\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}\\ &&{\kern15pt}=\psi_{2} (t) \end{array} $$

Therefore, updating the weights c_ij and the grayscale values r_ij by the equations in Section 4 amounts to changing the speed ψ_2(t) of the artificial mosquitoes.

Proof of Theorem 2.

Updating the first and second terms by the equations in Section 4 gives the following.

$$\begin{array}{@{}rcl@{}} \left\langle {du_{ij} (t)/dt} \right\rangle_{1}^{r} +\left\langle {du_{ij}(t)/dt} \right\rangle_{2}^{r} +\left\langle {du_{ij} (t)/dt} \right\rangle_{1}^{c} &&+\left\langle {du_{ij} (t)/dt} \right\rangle_{2}^{c} \\ &&=(\lambda_{1} +\lambda_{2} \frac{\partial J(t)}{\partial u_{ij} (t)})\cdot \{[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}+[\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}]^{2}\} \\ &&= (\lambda_{1} +\lambda_{2} )x_{ij}^{2} (t)[r_{ij}^{2} (t)+c_{ij}^{2} (t)][-u_{ij} (t)]^{2}\\ &&\ge 0 \end{array} $$

So it can be seen that the personal utility of the artificial mosquito increases according to the first and second terms.

Proof of Theorem 3.

Supposing that \(H(t)=\underset {i,j}{\max } \{-u_{ij}^{2} (t)\}\), we have

$$\begin{array}{@{}rcl@{}} [\exp (H(t)/2\varepsilon^{2})]^{2\varepsilon^{2}}&&\le \left[\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{n} \exp (-u_{ij}^{2}(t)/2\varepsilon^{2})\right]^{2\varepsilon^{2}}\\ &&\le [nn\exp (H(t)/2\varepsilon^{2})]^{2\varepsilon^{2}} \end{array} $$

Taking the logarithm of both sides of the above inequalities gives

$$H(t)\!\le\! 2\varepsilon^{2}\ln \sum\limits_{i=1}^{n} \sum\limits_{j=1}^{n} \exp (-u_{ij}^{2} (t)/2\varepsilon^{2}) \le H(t)+2\varepsilon^{2}\ln nn $$

Since nn is constant and ε is very small, we have

$$H(t)\!\approx\! 2\varepsilon^{2}\ln \sum\limits_{i=1}^{n} \sum\limits_{j=1}^{n} \exp (-u_{ij}^{2} (t)/2\varepsilon^{2}) -2\varepsilon^{2}\ln nn\!=\!2P(t) $$

Hence a decrease of the attraction function P(t) reduces \(\underset {i,j}{\max } \{-u_{ij}^{2} (t)\}\), that is, it increases the minimal utility \(\underset {i,j}{\min } \{u_{ij}^{2} (t)\}\) of the artificial mosquitoes.
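The chain of inequalities above is the standard log-sum-exp (smooth-maximum) bound, and it can be verified numerically. The following sketch checks both bounds on a randomly generated utility matrix; the matrix size, the value of ε, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

eps = 0.05
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=(10, 10))   # hypothetical utilities u_ij in [0, 1]

H = np.max(-u**2)                          # H(t) = max_{i,j} { -u_ij^2 }
# smooth approximation: 2*eps^2 * log sum_ij exp(-u_ij^2 / (2 eps^2))
lse = 2 * eps**2 * np.log(np.sum(np.exp(-u**2 / (2 * eps**2))))
n2 = u.size                                # nn = n * n

assert H <= lse + 1e-12                              # lower bound of the proof
assert lse <= H + 2 * eps**2 * np.log(n2) + 1e-12    # upper bound of the proof
```

As ε shrinks, the gap 2ε² ln(nn) vanishes, which is exactly why 2P(t) tracks the maximum of −u²_ij.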

Proof of Theorem 4.

Updating the third term by the equations in Section 4 gives the following.

$$\begin{array}{@{}rcl@{}} \left\langle {\frac{du_{ij} (t)}{dt}} \right\rangle_{3} &&=\left\langle {du_{ij} (t)/dt} \right\rangle_{3}^{r} +\left\langle {du_{ij} (t)/dt} \right\rangle_{3}^{c}\\ &&=-\lambda_{3} \frac{\partial P(t)}{\partial u_{ij} (t)}\{[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}+[\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}]^{2}\} \end{array} $$

Here, denote by \(\left \langle {\frac {dP(t)}{dt}} \right \rangle \) the derivative of the attraction function P(t).

$$\begin{array}{@{}rcl@{}} \left\langle {\frac{dP(t)}{dt}} \right\rangle &&=\frac{\partial P(t)}{\partial u_{ij} (t)}\left\langle {\frac{du_{ij} (t)}{dt}} \right\rangle_{3}\\ &&=-\lambda_{3} \left[ {\frac{\partial P(t)}{\partial u_{ij} (t)}} \right]^{2}\{[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}+[\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}]^{2}\} \\ &&=-\lambda_{3} \omega_{ij}^{2} (t)u_{ij}^{2} (t)x_{ij}^{2} (t)[r_{ij}^{2} (t)+c_{ij}^{2} (t)][-u_{ij} (t)]^{2}\le 0 \end{array} $$

where

$$\omega_{ij} (t)=\exp [-u_{ij}^{2} (t)/2\varepsilon ^{2}]/\sum\limits_{i=1}^{n} {\sum\limits_{j=1}^{n} {\exp [-u_{ij}^{2} (t)/2\varepsilon^{2}]}} $$

Therefore, P(t) is decreasing; as P(t) decreases, the minimal utility of the artificial mosquitoes increases.
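The monotone decrease of P(t) can be illustrated with a small numerical experiment. The sketch below performs plain gradient descent on a hypothetical utility matrix u, using ∂P/∂u_ij = −u_ij·ω_ij as derived above; the step size λ_3, the value of ε, and the matrix size are illustrative assumptions, and the full r/c dynamics are simplified to a direct update of u.

```python
import numpy as np

eps = 0.1
rng = np.random.default_rng(1)
u = rng.uniform(0.1, 1.0, size=(5, 5))     # hypothetical utilities u_ij

def P(u):
    # attraction function, up to its additive constant term
    return eps**2 * np.log(np.sum(np.exp(-u**2 / (2 * eps**2))))

def grad_P(u):
    w = np.exp(-u**2 / (2 * eps**2))
    w = w / w.sum()                        # the weights omega_ij from the proof
    return -u * w                          # dP/du_ij = -u_ij * omega_ij

lam3 = 0.1
history = [P(u)]
for _ in range(50):
    u = u - lam3 * grad_P(u)               # move u against the gradient of P
    history.append(P(u))

# P(t) is non-increasing along the trajectory, as Theorem 4 states
assert all(b <= a for a, b in zip(history, history[1:]))
assert history[-1] < history[0]
```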

Proof of Theorem 5.

Denote the whole utility of all artificial mosquitoes by J(t). Referring to the proofs of Theorems 2 and 4, we have

$$\begin{array}{@{}rcl@{}} \left\langle {\frac{dJ(t)}{dt}} \right\rangle &&=\frac{\partial J(t)}{\partial u_{ij} (t)}\left\langle {\frac{du_{ij} (t)}{dt}}\right\rangle_{2}\\ &&=\lambda_{2} \left[ {\frac{\partial J(t)}{\partial u_{ij} (t)}} \right]^{2}\{[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}+[\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}]^{2}\}\ge 0 \\ \end{array} $$

Therefore, J(t) increases.

Proof of Theorem 6.

Following the proofs of the above theorems, we have

$$\begin{array}{@{}rcl@{}} \left\langle {\frac{dQ(t)}{dt}} \right\rangle &&=\frac{\partial Q(t)}{\partial u_{ij} (t)}\left\langle {\frac{du_{ij} (t)}{dt}}\right\rangle_{4}\\ &&=-\lambda_{4} \left[ {\frac{\partial Q(t)}{\partial u_{ij} (t)}} \right]^{2}\{[\frac{\partial u_{ij} (t)}{\partial r_{ij} (t)}]^{2}\!+\![\frac{\partial u_{ij} (t)}{\partial c_{ij} (t)}]^{2}\}\!\le\! 0 \\ \end{array} $$

Therefore, Q(t) decreases.

Proof of Theorem 7.

We calculate with the other two β values and compare each result with the value generated by the original direction. Since x_ij(t) is the same in every case, it can be ignored in the calculations. The length of the whole path is calculated as follows.

$$Z=\sum\limits_{i,j} {d_{ij} \cdot r_{ij}} $$

where the d_ij are fixed values. The three different β values affect the grayscale values r_ij differently.

$$\begin{array}{@{}rcl@{}} Z_{\beta_{0}} -Z_{\beta_{1}} &&=\sum\limits_{i,j} {d_{ij} \cdot r_{ij}} -\sum\limits_{i,j} {d_{ij} \cdot [r_{ij} +{\beta_{1}^{r}} dr_{ij} (t-1)]}\\ &&=-\sum\limits_{i,j} {d_{ij} \cdot {\beta_{1}^{r}} dr_{ij} (t-1)}\\ &&=-{\beta_{1}^{r}}\sum\limits_{i,j} {d_{ij} \cdot dr_{ij} (t-1)} \\ \end{array} $$

\({\beta _{1}^{r}} \) can be obtained by calculation at each iteration. Therefore, \(Z_{\beta _{0}} -Z_{\beta _{1}} \) is mainly influenced by dr_ij(t−1), which may be positive or negative.

Similarly, calculating with \({\beta _{2}^{r}} \) yields analogous results: the resulting path length can be larger or smaller than the original one. So we can employ three searching manners and select the best one.
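The three-manner search described above can be sketched as follows: each candidate β produces a candidate grayscale matrix, the resulting path lengths Z are compared, and the best manner is kept. All concrete values here (distances, grayscales, and the β values themselves) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
d = rng.uniform(1.0, 10.0, size=(n, n))        # fixed distances d_ij
r = rng.uniform(0.0, 1.0, size=(n, n))         # current grayscale values r_ij
dr_prev = rng.normal(0.0, 0.1, size=(n, n))    # dr_ij(t-1), the previous update

def Z(r):
    return float(np.sum(d * r))                # Z = sum_{i,j} d_ij * r_ij

betas = (0.0, 0.5, 1.0)                        # beta_0 and two alternatives (hypothetical)
candidates = [r + b * dr_prev for b in betas]
lengths = [Z(c) for c in candidates]

best = int(np.argmin(lengths))                 # keep the manner with the shortest path
r = candidates[best]
assert Z(r) == min(lengths)
```

Because dr_ij(t−1) can be positive or negative, no single β dominates in advance, which is exactly why all three manners are evaluated each iteration.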

Proof of Theorem 8.

Firstly, the producer group scans three points, compares them with its current location, and selects the optimal point. So we get f(x_{p,k}) ≤ f(x_{p,k−1}), i.e., it meets f(D(x, ζ)) ≤ f(x). Secondly, at this iteration we select the producer group of the next generation, i.e., the output of this iteration is \(f({x_{p,k}}) = \underset {1 \le i \le n}{\min } [f({x_{i,k}})]\). This value is the minimum over all mosquito groups at this iteration. Here x_{i,k} ranges over all points scanned in the k-th iteration, i.e., it meets f(D(x, ζ)) ≤ f(ζ). By the above analysis, the algorithm meets Hypothesis 1.

Proof of Theorem 9.

To meet Hypothesis 2, the union of the sample-space supports must contain S, i.e.

$$S \subseteq {\underset{i = 1}{\overset{t}{\cup}} } {M_{i,k}} $$

where M_{i,k} is the supporting set of the sample space of individual i at the k-th iteration. Dispersed members are generated randomly in the sample space, so their supporting sets satisfy M_{i,k} = S and hence \({\underset {i = 1}{\overset {t}{\cup }} } {M_{i,k}} = S\); the sample spaces cover all points in S. Define the Borel subset A of S as M_{i,k}; then v(A) > 0 and \({\mu _{k}}[A] = \sum\limits _{i = 1}^{t} {{\mu _{i,k}}[A]} = 1\). So GMHSA meets Hypothesis 2 and converges globally.

By Theorem 1 and Theorem 2, we analyzed the convergence of the system.

Proof of Theorem 10.

To show that the proposed formula meets the required conditions, we calculate as follows.

$$\begin{array}{@{}rcl@{}} \frac{{d\delta} }{{dt}}&&= 2{\left( {\frac{t}{{t{\_}\max} }} \right)^{2}} \cdot \frac{1}{{t{\_}\max} } \cdot 0.5 \ge 0\\ \frac{{d\left( {\frac{{d\delta} }{{dt}}} \right)}}{{dt}}&&= 4\left( {\frac{t}{{t{\_}\max} }} \right) \cdot {\left( {\frac{1}{{t{\_}\max} }} \right)^{2}} \cdot 0.5 \end{array} $$

When t > 0, \(\frac {{d\left ({\frac {{d\delta } }{{dt}}} \right )}}{{dt}} > 0\), so \(\frac {{d\delta } }{{dt}}\) increases; when t < 0, \(\frac {{d\left ({\frac {{d\delta } }{{dt}}} \right )}}{{dt}} < 0\), so \(\frac {{d\delta } }{{dt}}\) decreases. In either case \(\frac {{d\delta } }{{dt}} \ge 0\), so the value δ is monotonically increasing, and it changes slowly near t = 0.

Suppose that the iteration number is large enough. When the producer group has reached the best value, t will gradually increase. Finally, when t>t_max, δ will keep the value 1.

Similarly, the value t of the scrounger groups will decrease. When t falls below −t_max, δ will keep the value 0.
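The behavior of δ claimed in this proof can be checked by numerically integrating the derivative dδ/dt given above. The closed form of δ is not restated here, so the sketch integrates the stated derivative directly; t_max, the step size, and the integration range are illustrative assumptions.

```python
t_max = 100.0
steps = 2000
dt = 2 * t_max / steps      # integrate t from -t_max to t_max
delta = 0.0
values = []
for i in range(steps + 1):
    t = -t_max + i * dt
    values.append(delta)
    # d(delta)/dt = 2*(t/t_max)^2 * (1/t_max) * 0.5, as in the proof
    d_delta = 2 * (t / t_max) ** 2 * (1 / t_max) * 0.5
    delta += d_delta * dt

# delta never decreases ...
assert all(b >= a for a, b in zip(values, values[1:]))
# ... and it changes most slowly near t = 0
mid = steps // 2
assert values[mid + 1] - values[mid] < values[1] - values[0]
```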

Proof of Theorem 11.

After the producer group has found the minimal value, the system remains stable for a certain period of time, during which the producer group is continuously the leader. When this period is long enough, it follows from Theorem 10 that the leadership of the producer group δ_p → 1, while the leadership of the scrounger groups δ_s → 0. Therefore, l_1 → 0 and l_2 → 1.

Then the movement equation of scrounger groups can be simplified as follows.

$$\begin{array}{@{}rcl@{}} d{c_{ij}}(t)/dt &&= rand() \cdot [c_{ij}^{p}(t) - {c_{ij}}(t)] \\ {c_{ij}}(t) &&= {c_{ij}}(t - 1) + d{c_{ij}}(t)/dt\\ d{r_{ij}}(t)/dt&& = rand() \cdot [r_{ij}^{p}(t) - {r_{ij}}(t)] \\ {r_{ij}}(t) &&= {r_{ij}}(t - 1) + d{r_{ij}}(t)/dt \end{array} $$

Here, we reduce the problem to one dimension, considering only the grayscale values. Denote the random variable by a. Then the formula simplifies as follows.

$$r(k + 1) = r(k) + {\mathrm{a}} \times ({r_{p}} - r(k)) $$

Applying the Z-transform, we obtain the characteristic equation as follows.

$$z - 1 + {\mathrm{a}} = 0 $$

Then, substitute \(z = \frac {{w + 1}}{{w - 1}}\). The stability conditions for the linear system are that the coefficients are positive and that the entries in the first column of the Routh array are positive. Therefore, the stability condition is obtained as follows.

$$0 < {a} < 2 $$

Since a = rand() satisfies 0 < a < 1, the stability condition is met, and by the final-value theorem it can be known that:

$$r(k) = \underset{z \to 1}{\lim} \left( {(z - 1) \cdot R(z)} \right) = {r_{p}} $$

For the weights c, the calculation is similar. So the scrounger groups tend to the producer group, which makes the algorithm converge locally.
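The local-convergence argument reduces to the scalar recursion r(k+1) = r(k) + a·(r_p − r(k)), which is easy to simulate. The sketch below confirms convergence to r_p inside the stability region 0 < a < 2 and divergence outside it; r_0, r_p, and the tested a values are illustrative.

```python
def iterate(a, r0=0.0, rp=1.0, steps=200):
    # scrounger update from the proof: r(k+1) = r(k) + a * (r_p - r(k))
    r = r0
    for _ in range(steps):
        r = r + a * (rp - r)
    return r

# inside the stability region 0 < a < 2, r(k) converges to r_p
for a in (0.3, 1.0, 1.7):
    assert abs(iterate(a) - 1.0) < 1e-6

# outside it, the error grows instead of shrinking
assert abs(iterate(2.5, steps=20) - 1.0) > 1.0
```

The error obeys e(k+1) = (1 − a)·e(k), so |1 − a| < 1 (i.e. 0 < a < 2) is exactly the stability condition derived above; a = rand() ∈ (0, 1) always satisfies it.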

Proof of Theorem 12.

  1. Based on Lyapunov's second theorem on stability, we have

    $$ \lambda_{1}+\lambda_{2}\frac{\partial J(t)}{\partial u_{ij}(t)}+\lambda_{3}\frac{\partial P(t)}{\partial u_{ij}(t)}+ \lambda_{4}\frac{\partial Q(t)}{\partial u_{ij}(t)}<0 $$
    (20)

    It is straightforward from (20).

  2. In (20), we have

    $$\begin{array}{@{}rcl@{}}\frac{\partial J(t)}{\partial u_{ij}(t)}&&=1;\\ \frac{\partial P(t)}{\partial u_{ij}(t)}&&=-u_{ij}\cdot \frac{\exp[-(u_{ij})^{2}/2\epsilon^{2}]}{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\exp[-(u_{ij})^{2}/2\epsilon^{2}]}<0;\\ \frac{\partial Q(t)}{\partial u_{ij}(t)}&&=-[\frac{1}{1+\exp(-10 u_{ij})}-\frac{1}{2}]<0. \end{array} $$

    Putting the positive items of (20) on the left side of “ <” and the negative items on the right side, (20) becomes

    $$ \lambda_{1}+\lambda_{2}< -u_{ij}\cdot \frac{\exp[-(u_{ij})^{2}/2\epsilon^{2}]}{\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}\exp[-(u_{ij})^{2}/2\epsilon^{2}]}\cdot \lambda_{3} -\frac{\partial Q(t)}{\partial u_{ij}(t)} \cdot \lambda_{4} $$
    (21)

    In (21), since u ij ∈[0,1] and 0<𝜖 < 1, then \(u_{ij}\cdot \frac {\exp [-(u_{ij})^{2}/2\epsilon ^{2}]}{\sum\limits _{i=1}^{n}\sum\limits _{j=1}^{n}\exp [-(u_{ij})^{2}/2\epsilon ^{2}]} \approx \frac {1}{n\times n}\).

    Usually n is large because GMHSA is designed for large-scale problems, so \(\frac {1}{n\times n}\) is very small.

    \(\frac {1}{n\times n}\) is the coefficient of λ 3 in (21), and therefore λ 3 will hardly influence the convergence of the GMHS algorithm.

  3. Based on conclusion 2, (21) approximately becomes

    $$ \lambda_{1}+\lambda_{2} < -\frac{\partial Q(t)}{\partial u_{ij}(t)} \lambda_{4}. $$
    (22)

    where \(-\frac {\partial Q(t)}{\partial u_{ij}(t)} =\frac {1}{1+\exp (-10 u_{ij})}-\frac {1}{2}\approx 0.4933\). Therefore, (22) becomes

    $$ \lambda_{1}+\lambda_{2} < 0.4933\cdot \lambda_{4}. $$
    (23)

For example, if λ_1 + λ_2 ≤ 0.44 and λ_4 ≥ 0.9, then condition (23) will be satisfied and GMHSA will converge to a stable equilibrium state.


Cite this article

Feng, X., Liu, X. & Yu, H. Group mosquito host-seeking algorithm. Appl Intell 44, 665–686 (2016). https://doi.org/10.1007/s10489-015-0718-2
