An Island Model based on Stigmergy to solve optimization problems


Abstract

The Island Model (IM) is an approach often used to parallelize Evolutionary Algorithms (EAs). In an IM, the population is distributed among islands, connected by a topology, that evolve their solutions in parallel. Periodically, solutions migrate between islands according to a migration policy. The IM can be seen as an ideal structure for combining different algorithms in an organized and cooperative way to solve a problem. Motivated by the number and diversity of EAs proposed in recent decades, in terms of performance and evolutionary behavior, this work proposes a hybrid IM configuration, called the Stigmergy Island Model (Stgm-IM), inspired by the natural phenomenon of stigmergy. Stigmergy is present in groups of some social species, through which agents organize themselves and maintain cooperation by indirect communication. The Stgm-IM was evaluated with respect to its evolutionary behavior and its performance on a benchmark suite of fifteen optimization problems, and the results met expectations.
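
To make the generic structure described above concrete, the following is a minimal, self-contained sketch of a synchronous island model with a ring topology: each island evolves its population independently and, every few generations, its best solutions migrate to the neighboring island and replace that island's worst solutions. This is only an illustrative skeleton under simple assumptions (a toy mutation-based EA and a sphere objective), not the Stgm-IM proposed in the paper; all names and parameters are hypothetical.

```python
import random

def sphere(x):
    # Toy objective to minimize.
    return sum(xi * xi for xi in x)

def evolve(pop, f, sigma=0.5):
    # One very simple evolutionary step: Gaussian mutation with elitist replacement.
    out = []
    for x in pop:
        child = [xi + random.gauss(0.0, sigma) for xi in x]
        out.append(child if f(child) < f(x) else x)
    return out

def island_model(f, dim=10, n_islands=4, pop_size=20, generations=200,
                 migration_interval=25, migrants=2):
    # One population per island, connected by a ring topology.
    islands = [[[random.uniform(-100, 100) for _ in range(dim)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    topology = {i: [(i + 1) % n_islands] for i in range(n_islands)}

    for gen in range(1, generations + 1):
        islands = [evolve(pop, f) for pop in islands]       # islands evolve independently
        if gen % migration_interval == 0:                   # periodic migration
            for i in range(n_islands):
                emigrants = sorted(islands[i], key=f)[:migrants]   # policy: best solutions emigrate
                for j in topology[i]:
                    islands[j].sort(key=f)
                    islands[j][-migrants:] = [list(m) for m in emigrants]  # policy: replace the worst
    return min((x for pop in islands for x in pop), key=f)

best = island_model(sphere)
print(sphere(best))
```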

[Figures 1–16 omitted. Fig. 2 adapted from Dorigo et al. (2000).]

References

  • Abdelhafez A, Alba E, Luque G (2019) Performance analysis of synchronous and asynchronous distributed genetic algorithms on multiprocessors. Swarm Evol Comput 49:147–157. https://doi.org/10.1016/j.swevo.2019.06.003

  • Alba E (2005) Parallel metaheuristics: a new class of algorithms. Wiley, Hoboken

  • Bessaou M, Pétrowski A, Siarry P (2000) Island model cooperating with speciation for multimodal optimization. In: Schoenauer M, Deb K, Rudolph G, Yao X, Lutton E, Merelo JJ, Schwefel HP (eds) Parallel problem solving from nature PPSN VI. Springer, Berlin, pp 437–446

  • Bilal, Pant M, Zaheer H, Garcia-Hernandez L, Abraham A (2020) Differential evolution: a review of more than two decades of research. Eng Appl Artif Intell 90:103479

  • Bodenhofer U (2002) Genetic algorithms: theory and applications. Fuzzy Logic Laboratorium Linz, Hagenberg

  • Bonabeau E (1998) Social insect colonies as complex adaptive systems. Ecosystems 1(5):437–443

  • Candan C, Goeffon A, Lardeux F, Saubion F (2012) A dynamic island model for adaptive operator selection. In: Proceedings of the 14th annual conference on genetic and evolutionary computation, GECCO’12, pp. 1253–1260. Association for Computing Machinery, New York, NY, USA

  • Cantú-Paz E (1998) A survey of parallel genetic algorithms. Calculateurs Parallèles, Réseaux et Systèmes Répartis 10(2):141–171

  • Cantú-Paz E (2001) Migration policies, selection pressure, and parallel evolutionary algorithms. J Heuristics 7(4):311–334

  • Capriles PVSZ, Fonseca LG, Barbosa HJC, Lemonge ACC (2007) Rank-based ant colony algorithms for truss weight minimization with discrete variables. Commun Numer Methods Eng 23(6):553–575

  • Cheong PY, Aggarwal D, Hanne T, Dornberger R (2017) Variation of ant colony optimization parameters for solving the travelling salesman problem. In: 2017 IEEE 4th international conference on soft computing machine intelligence (ISCMI), pp 60–65

  • Crainic TG, Toulouse M (2003) Parallel strategies for meta-heuristics. Springer, Boston, pp 475–513

  • Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91(2):201–213

  • Dorigo M, Bonabeau E, Theraulaz G (2000) Ant algorithms and stigmergy. Future Gener Comput Syst 16(8):851–871

  • Dorigo M, Stützle T (2010) Ant colony optimization: overview and recent advances. Springer, Boston, pp 227–263

  • Duarte G, Lemonge A, Goliatt L (2017) A dynamic migration policy to the island model. In: 2017 IEEE congress on evolutionary computation (CEC), pp 1135–1142

  • Duarte G, Lemonge A, Goliatt L (2018) A new strategy to evaluate the attractiveness in a dynamic island model. In: 2018 IEEE congress on evolutionary computation (CEC), pp 1–8

  • Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, MHS’95. IEEE, pp 39–43

  • Gaertner D, Clark K (2005) On optimal parameters for ant colony optimization algorithms. In: Proceedings of the international conference on artificial intelligence 2005. CSREA Press, pp 83–89

  • Guan W, Szeto KY (2013) Topological effects on the performance of island model of parallel genetic algorithm. In: Proceedings of the 12th international conference on artificial neural networks: advances in computational intelligence—volume part II, IWANN’13. Springer, Heidelberg, pp 11–19

  • Gustafson S, Burke EK (2006) The speciating island model: an alternative parallel evolutionary algorithm. J Parallel Distrib Comput 66(8):1025–1036 (special issue: parallel bioinspired algorithms)

  • Holland JH (1973) Genetic algorithms and the optimal allocation of trials. SIAM J Comput 2(2):88–105

  • Izzo D, Rucinski M, Ampatzis C (2009) Parallel global optimisation meta-heuristics using an asynchronous island-model. In: 2009 IEEE congress on evolutionary computation, pp 2301–2308

  • Jadaan OA, Rajamani L, Rao CR (2005) Improved selection operator for GA. J Theor Appl Inf Technol 4:269–277

  • Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University, Engineering Faculty, Kayseri, Turkiye

  • Karaboga D, Basturk B (2007) Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In: Foundations of fuzzy logic and soft computing, vol 4529, pp 789–798. Springer, Berlin

  • Kurdi M (2015) A new hybrid island model genetic algorithm for job shop scheduling problem. Comput Ind Eng 88(Supplement C):273–283

  • Lardeux F, Maturana J, Rodriguez-Tello E, Saubion F (2019) Migration policies in dynamic island models. Nat Comput Int J 18(1):163–179. https://doi.org/10.1007/s11047-017-9660-z

  • Li C, Yang S (2008) An island based hybrid evolutionary algorithm for optimization. Springer, Berlin, pp 180–189

  • Liang JJ, Qu BY, Suganthan PN, Chen Q (2014) Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. Technical report, Nanyang Technological University

  • Lynn N, Ali MZ, Suganthan PN (2018) Population topologies for particle swarm optimization and differential evolution. Swarm Evol Comput 39:24–35

  • Ma H, Shen S, Yu M, Yang Z, Fei M, Zhou H (2019) Multi-population techniques in nature inspired optimization algorithms: a comprehensive survey. Swarm Evol Comput 44:365–387

  • Magalhaes TT, Krempser E, Barbosa HJC (2015) Migration policies to improve exploration in parallel island models for optimization via metaheuristics. In: Proceedings of the XXXVII Ibero-Latin American congress on computational methods in engineering, CILAMCE 2015

  • Meng Q, Wu J, Ellis J, Kennedy PJ (2017) Dynamic island model based on spectral clustering in genetic algorithm. In: 2017 international joint conference on neural networks (IJCNN), pp 1724–1731

  • Mezura-Montes E, Velázquez-Reyes J, Coello Coello CA (2006) A comparative study of differential evolution variants for global optimization. In: Proceedings of the 8th annual conference on genetic and evolutionary computation, GECCO’06, pp 485–492. ACM, New York

  • Parpinelli RS, Lopes HS (2012) An ecology-based heterogeneous approach for cooperative search. In: Barros LN, Finger M, Pozo AT, Giménez-Lugo GA, Castilho M (eds) Advances in artificial intelligence—SBIA 2012. Springer, Berlin, pp 212–221

  • Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization. Swarm Intell 1(1):33–57

  • Ruciński M, Izzo D, Biscani F (2010) On the impact of the migration topology on the island model. Parallel Comput 36(10–11):555–571 (parallel architectures and bioinspired algorithms)

  • Skolicki ZM (2007) An analysis of island models in evolutionary computation. Ph.D. thesis, Fairfax, VA, USA

  • Storn R, Price K (1997) Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359

  • Ursem RK (2000) Multinational GAs: multimodal optimization techniques in dynamic environments. In: Proceedings of the 2nd annual conference on genetic and evolutionary computation, GECCO'00, pp 19–26. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA

  • Yu JJ, Li VO (2015) A social spider algorithm for global optimization. Appl Soft Comput 30:614–627

  • Zhang Y, Wang S, Ji G (2015) A comprehensive survey on particle swarm optimization algorithm and its applications. Math Probl Eng 2015:1–38

Acknowledgements

The authors acknowledge the support of the Graduate Program in Computational Modeling at the Federal University of Juiz de Fora (UFJF) and the Brazilian funding agencies CNPq—Conselho Nacional de Desenvolvimento Científico e Tecnológico (Grant Numbers 306186/2017, 429639/2016 and 306069/2016), FAPEMIG (APQ-00334/18), and CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), finance code 001. This research was developed with the support of the Núcleo Avançado de Computação de Alto Desempenho (NACAD) at COPPE, Federal University of Rio de Janeiro (UFRJ). The authors would like to thank the reviewers for their corrections and suggestions, which helped improve the quality of the paper.

Author information

Corresponding author

Correspondence to Grasiele Regina Duarte.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Some definitions (Liang et al. 2014):

  • All test functions are minimization problems given by the form

    $$\begin{aligned} \left\{ \begin{array}{ll} {\text {Minimize }}f({{\mathbf{x }}})\\ {\mathbf{x}} \in {\mathbb {R}}^D \end{array}, \right. \end{aligned}$$

    where \(f({{\mathbf{x }}})\) is the objective function, \({{\mathbf{x }}} \in [-100, 100]^D\) and \(D \in \{10, 30, 50, 100\}\) is the dimension of the problem.

  • \({\mathbf{o }}_i \in [-80, 80]^D\), such that \(i \in \{1, 2,\ldots , 15\}\), is the shifted global optimum of the problem Fi.

  • \({\mathbf{M }}_i\) is the rotation matrix of the problem Fi, generated by applying the Gram–Schmidt orthonormalization process to a random matrix.

  • \(Fi^* = Fi({{\mathbf{x }}}^*)\), where \(i \in \{1, 2,\ldots , 15\}\), \({\mathbf{x }}^*\) is the optimal solution of the problem Fi and \(Fi({\mathbf{x }}^*) = i \times 100\).
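
As an illustration of these definitions, the sketch below builds a test function of the form \(Fi({\mathbf{x}}) = f_i({\mathbf{M}}_i({\mathbf{x}} - {\mathbf{o}}_i)) + Fi^*\) in NumPy, using an orthonormal matrix obtained from a QR factorization of a random matrix (standing in for the Gram–Schmidt construction). The basic function used here (the high-conditioned elliptic function \(f_1\), defined below) and all variable names are illustrative assumptions, not the CEC reference implementation.

```python
import numpy as np

def rotation_matrix(dim, rng):
    # Orthonormal matrix from a QR factorization of a random matrix.
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def make_test_function(f_basic, shift, rotation, f_star):
    # Fi(x) = f_basic(Mi (x - oi)) + Fi*
    def F(x):
        z = rotation @ (np.asarray(x, dtype=float) - shift)
        return f_basic(z) + f_star
    return F

def f1_elliptic(z):
    # Basic function f1 (high-conditioned elliptic), defined in the next subsection.
    d = z.size
    return float(np.sum((10.0**6) ** (np.arange(d) / (d - 1)) * z**2))

rng = np.random.default_rng(seed=0)
D = 10
o1 = rng.uniform(-80.0, 80.0, D)                 # shifted global optimum o_1
M1 = rotation_matrix(D, rng)                     # rotation matrix M_1
F1 = make_test_function(f1_elliptic, o1, M1, f_star=1 * 100)
print(F1(o1))                                    # at the optimum, F1(o1) = F1* = 100
```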

Definition of the basic functions:

  • \(f_1\)

    $$\begin{aligned} f_1({\mathbf{x }}) = \displaystyle \sum _{i = 1}^{i = D}{(10^6)^{\frac{i - 1}{D - 1}}{\text {x}}_i^2} \end{aligned}$$
  • \(f_2\)

    $$\begin{aligned} f_2({\mathbf{x }}) = {\text {x}}_1^2 + 10^6 \displaystyle \sum _{i = 2}^{i = D}{{\text {x}}_i^2} \end{aligned}$$
  • \(f_3\)

    $$\begin{aligned} f_3({\mathbf{x }}) = 10^6{\text {x}}_1^2 + \displaystyle \sum _{i = 2}^{i = D}{{\text {x}}_i^2} \end{aligned}$$
  • \(f_4\)

    $$\begin{aligned} f_4({\mathbf{x }}) = \displaystyle \sum _{i = 1}^{i = D - 1}{(100({\text {x}}_i^2 - {\text {x}}_{i + 1})^2 + ({\text {x}}_i -1)^2)} \end{aligned}$$
  • \(f_5\)

    $$\begin{aligned} f_5({\mathbf{x }}) = \displaystyle -20{\text {exp}}\left( -0.2\sqrt{\frac{1}{D}\sum _{i = 1}^{i = D}{{\text {x}}_i^2}}\right) - {\text {exp}}\left( \frac{1}{D}\sum _{i = 1}^{i = D}{{\text {cos}}(2\pi {\text {x}}_i)}\right) + 20 + e \end{aligned}$$
  • \(f_6\)

    $$\begin{aligned} f_6({\mathbf{x }}) = \displaystyle \sum _{i = 1}^{i = D}{\left( \sum _{k = 0}^{k = 20}{[0.5^k {\text {cos}}(2 \pi 3^k({\text {x}}_i + 0.5))]}\right) } - D \sum _{k = 0}^{k = 20}{[0.5^k {\text {cos}}(2 \pi 3^k \times 0.5)]} \end{aligned}$$
  • \(f_7\)

    $$\begin{aligned} f_7({\mathbf{x }}) = \displaystyle \sum _{i = 1}^{i = D}{\frac{{\text {x}}_i^2}{4000}} - \prod _{i = 1}^{i = D}{{\text {cos}}\left( \frac{{\text {x}}_i}{\sqrt{i}}\right) } + 1 \end{aligned}$$
  • \(f_8\)

    $$\begin{aligned} f_8({\mathbf{x }}) = \displaystyle \sum _{i = 1}^{i = D}{({\text {x}}_i^2 - 10 {\text {cos}}(2 \pi {\text {x}}_i) + 10)} \end{aligned}$$
  • \(f_9\)

    $$\begin{aligned} f_9({\mathbf{x }}) = 418.9829 \times D - \displaystyle \sum _{i = 1}^{i = D}{g(z_i)}, \end{aligned}$$

    where \(z_i = {\text {x}}_i + 4.209687462275036\)e+002 and

    $$\begin{aligned} g(z_i) = \left\{ \begin{array}{lllllll} z_i {\text {sin}}(|z_i|^{\frac{1}{2}}), {\text { if }} |z_i| \le 500 \\ \\ \displaystyle (500 - {\text {mod}}(z_i, 500)) {\text {sin}}(\sqrt{|500 - {\text {mod}}(z_i, 500)|}) - \\ \displaystyle \frac{(z_i - 500)^2}{10{,}000D}, {\text { if }} z_i > 500 \\ \\ \displaystyle ({\text {mod}}(|z_i|, 500) - 500) {\text {sin}}(\sqrt{|{\text {mod}}(|z_i|, 500) - 500|}) - \\ \displaystyle \frac{(z_i + 500)^2}{10{,}000D}, {\text { if }} z_i < -500 \end{array} \right. \end{aligned}$$
  • \(f_{10}\)

    $$\begin{aligned} f_{10}({\mathbf{x }}) = \displaystyle \frac{10}{D^2} \prod _{i = 1}^{i = D}{\left( 1 + i \sum _{j = 1}^{j = 32}{\frac{|2^j {\text {x}}_i - {\text {round}}(2^j {\text {x}}_i)|}{2^j}}\right) ^{\frac{10}{D^{1.2}}}} - \frac{10}{D^2} \end{aligned}$$
  • \(f_{11}\)

    $$\begin{aligned} f_{11}({\mathbf{x }}) = \displaystyle \left| \sum _{i = 1}^{i = D}{{\text {x}}_i^2 - D}\right| ^{\frac{1}{4}} + \frac{\left( 0.5 \displaystyle \sum \nolimits _{i = 1}^{i = D}{{\text {x}}_i^2} + \sum \nolimits _{i = 1}^{i = D}{{\text {x}}_i}\right) }{D} + 0.5 \end{aligned}$$
  • \(f_{12}\)

    $$\begin{aligned} f_{12}({\mathbf{x }}) = \displaystyle \left| \left( \sum _{i = 1}^{i = D}{{\text {x}}_i^2}\right) ^2 - \left( \sum _{i = 1}^{i = D}{{\text {x}}_i}\right) ^2\right| ^{\frac{1}{2}} + \displaystyle \frac{\left( 0.5 \displaystyle \sum \nolimits _{i = 1}^{i = D}{{\text {x}}_i^2} + \sum \nolimits _{i = 1}^{i = D}{{\text {x}}_i}\right) }{D} + 0.5 \end{aligned}$$
  • \(f_{13}\)

    $$\begin{aligned} f_{13}({\mathbf{x }}) = f_7(f_4({\text {x}}_1, {\text {x}}_2)) + f_7(f_4({\text {x}}_2, {\text {x}}_3)) + \cdots + f_7(f_4({\text {x}}_{D - 1}, {\text {x}}_D)) + f_7(f_4({\text {x}}_D, {\text {x}}_1)) \end{aligned}$$
  • \(f_{14}\)

    $$\begin{aligned} f_{14}({\mathbf{x }}) = g({\text {x}}_1, {\text {x}}_2) + g({\text {x}}_2, {\text {x}}_3) + \cdots + g({\text {x}}_{D - 1}, {\text {x}}_D) + g({\text {x}}_D, {\text {x}}_1) \end{aligned}$$

    where

    $$\begin{aligned} g({\text {y}}, {\text {z}}) = 0.5 + \displaystyle \frac{({\text {sin}}^2(\sqrt{{\text {y}}^2 + {\text {z}}^2}) - 0.5)}{(1 + 0.001({\text {y}}^2 + {\text {z}}^2))^2} \end{aligned}$$
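
For concreteness, two of the basic functions above, \(f_5\) (Ackley) and \(f_8\) (Rastrigin), can be transcribed directly into NumPy as follows. This is a plain transcription of the formulas as written here, under the assumption that they behave as stated; it is not the CEC reference implementation, and the function names are illustrative.

```python
import numpy as np

def f5_ackley(x):
    # f5 as defined above: both terms average over the D coordinates.
    x = np.asarray(x, dtype=float)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x)))
            + 20.0 + np.e)

def f8_rastrigin(x):
    # f8 as defined above.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

print(f5_ackley(np.zeros(10)), f8_rastrigin(np.zeros(10)))  # both evaluate to 0 at the origin
```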

Definition of the test functions:

  • Unimodal Functions:

    • F1

      $$\begin{aligned} F1({\mathbf{x }}) = f_1({\mathbf{M }}_1({\mathbf{x }} - {\mathbf{o }}_1)) + F1^* \end{aligned}$$
    • F2

      $$\begin{aligned} F2({\mathbf{x }}) = f_2({\mathbf{M }}_2({\mathbf{x }} - {\mathbf{o }}_2)) + F2^* \end{aligned}$$

Simple Multimodal Functions:

  • F3

    $$\begin{aligned} F3({\mathbf{x }}) = f_5({\mathbf{M }}_3({\mathbf{x }} - {\mathbf{o }}_3)) + F3^* \end{aligned}$$
  • F4

    $$\begin{aligned} F4({\mathbf{x }}) = f_8\left( {\mathbf{M }}_4\left( \displaystyle \frac{5.12({\mathbf{x }} - {\mathbf{o }}_4)}{100}\right) \right) + F4^* \end{aligned}$$
  • F5

    $$\begin{aligned} F5({\mathbf{x }}) = f_9\left( {\mathbf{M }}_5\left( \displaystyle \frac{1000({\mathbf{x }} - {\mathbf{o }}_5)}{100}\right) \right) + F5^* \end{aligned}$$

Hybrid Functions:

  • For such functions, consider the following (a short computational sketch of this construction is given after the definition of F8 below):

    $$\begin{aligned} FH({\mathbf{x }}) = g_1({\mathbf{M }}_1{\mathbf{z }}_1) + g_2({\mathbf{M }}_2{\mathbf{z }}_2) + \cdots + g_N({\mathbf{M }}_N{\mathbf{z }}_N) + F^*({\mathbf{x }}), \end{aligned}$$

    where

    • \(FH({\mathbf{x }})\), such that \(H \in \{6, 7, 8\}\), is a Hybrid Function.

    • \(g_i({\mathbf{x }})\) is the ith basic function involved in a Hybrid Function.

    • N is the number of basic functions.

    • \({\mathbf{z }} = [{\mathbf{z }}_1, {\mathbf{z }}_2,\ldots , {\mathbf{z }}_N]\)

    • \({\mathbf{z }}_1 = [{\mathbf{y }}_{S_1}, \mathbf{y} _{S_2},\ldots , {\mathbf{y }}_{S_{n_1}}]\), \({\mathbf{z }}_2 = [{\mathbf{y }}_{S_{n_1 + 1}}, {\mathbf{y }}_{S_{n_1 + 2}},\ldots , {\mathbf{y }}_{S_{n_1 + n_2}}]\),

    • \({\mathbf{z }}_N = [{\mathbf{y }}_{S_{\sum _{i = 1}^{N - 1}{n_i + 1}}}, {\mathbf{y }}_{S_{\sum _{i = 1}^{N - 1}{n_i + 2}}},\ldots , {\mathbf{y }}_{S_{D}}]\)

    • \({\mathbf{y }} = {\mathbf{x }} - {\mathbf{o }}_i\)

    • \(S = {\text {permutation}}(1:D)\)

    • \(p_i\) controls the percentage of dimensions assigned to \(g_i({\mathbf{x }})\)

    • \(n_i\) is the number of dimensions assigned to each basic function, with \(\sum _{i = 1}^{i = N}{n_i} = D\)

    • \(n_1 = \lceil p_1 D \rceil\), \(n_2 = \lceil p_2 D \rceil\),..., \(n_{N - 1} = \lceil p_{N - 1} D \rceil\), \(n_{N} = D - \displaystyle \sum _{i = 1}^{i = N - 1}{n_i}\)

  • F6

    • \(N = 3\)

    • \(p = [0.3, 0.3, 0.4]\)

    • \(g_1 = f_9\)

    • \(g_2 = f_8\)

    • \(g_3 = f_1\)

  • F7

    • \(N = 4\)

    • \(p = [0.2, 0.2, 0.3, 0.3]\)

    • \(g_1 = f_7\)

    • \(g_2 = f_6\)

    • \(g_3 = f_4\)

    • \(g_4 = f_{14}\)

  • F8

    • \(N = 5\)

    • \(p = [0.1, 0.2, 0.2, 0.2, 0.3]\)

    • \(g_1 = f_{14}\)

    • \(g_2 = f_{12}\)

    • \(g_3 = f_4\)

    • \(g_4 = f_9\)

    • \(g_5 = f_1\)
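
The sketch below (referenced above) shows the hybrid construction: the shifted coordinates are permuted, split into \(N\) blocks of sizes \(n_1, \ldots , n_N\), and each block is rotated and evaluated by its basic function. The helper names and the use of identity rotations in the usage note are assumptions for illustration only.

```python
import numpy as np

def make_hybrid(basic_funcs, proportions, shift, rotations, f_star, rng):
    # FH(x) = g_1(M_1 z_1) + ... + g_N(M_N z_N) + F*, following the definitions above.
    D = shift.size
    perm = rng.permutation(D)                        # S = permutation(1:D)
    sizes = [int(np.ceil(p * D)) for p in proportions[:-1]]
    sizes.append(D - sum(sizes))                     # n_N takes the remaining dimensions

    def FH(x):
        y = np.asarray(x, dtype=float) - shift       # y = x - o
        total, start = float(f_star), 0
        for g, n, M in zip(basic_funcs, sizes, rotations):
            z = y[perm[start:start + n]]             # z_k: k-th block of permuted coordinates
            total += g(M @ z)
            start += n
        return total
    return FH

# Usage (F6-like setting): basic_funcs = (f9, f8, f1), proportions = (0.3, 0.3, 0.4),
# rotations = a list of orthonormal matrices of sizes n_1, ..., n_N (identities work for a quick test).
```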

Composed Functions:

  • For such functions, consider the following (a short computational sketch of the weighted composition is given at the end of this appendix):

    $$\begin{aligned} FC({\mathbf{x }}) = \displaystyle \sum _{i = 1}^{N}{\{\omega _i[\lambda _i g_i({\mathbf{x }}) + bias_i]\}} + F^*, \end{aligned}$$

    where

    • \(FC({\mathbf{x }})\), such that \(C \in \{9, 10, 11, 12, 13, 14, 15\}\), is a Composed Function.

    • N is the number of basic functions.

    • \(g_i({\mathbf{x }})\) is the ith basic function involved in a Composed Function.

    • \(\lambda _i\) is used to control the height of each \(g_i({\mathbf{x }})\).

    • \(bias_i\) defines which optimum is the global optimum.

    • \(\omega _i\) is given by

      $$\begin{aligned} \omega _i = \frac{{\text {w}}_i}{\sum \nolimits _{j = 1}^{N}{{\text {w}}_j}} \end{aligned}$$

      where \({\text {w}}_i\) is given by

      $$\begin{aligned} {\text {w}}_i = \frac{1}{\sqrt{\sum \nolimits _{j = 1}^{j = D}{({\text {x}}_j - {\text {o}}_{ij})^2}}} {\text {exp}} \left( -\frac{\sum \nolimits _{j = 1}^{j = D}{({\text {x}}_j - {\text {o}}_{ij})^2}}{2 D \sigma _i^2}\right) , \end{aligned}$$

      where

      • \({\mathbf{o }}_i\) is a new optimal solution defined for each \(g_i({\mathbf{x }})\).

      • \(\sigma _i\) is used to control the range of each \(g_i({\mathbf{x }})\).

  • F9

    • \(N = 3\)

    • \(\sigma = [20, 20, 20]\)

    • \(\lambda = [1, 1, 1]\)

    • \(bias = [0, 100, 200]+F9^*\)

    • \(g_1 = F5\)

    • \(g_2 = F4\)

    • \(g_3 = f_{12}\)

  • F10

    • \(N = 3\)

    • \(\sigma = [10, 30, 50]\)

    • \(\lambda = [1, 1, 1]\)

    • \(bias = [0, 100, 200]+F{10}^*\)

    • \(g_1 = F6\)

    • \(g_2 = F7\)

    • \(g_3 = F8\)

  • F11

    • \(N = 5\)

    • \(\sigma = [10, 10, 10, 20, 20]\)

    • \(\lambda = [10, 10, 2.5, 25, 10^{-6}]\)

    • \(bias = [0, 100, 200, 300, 400]+F{11}^*\)

    • \(g_1 = f_{12}\)

    • \(g_2 = F4\)

    • \(g_3 = F5\)

    • \(g_4 = f_6\)

    • \(g_5 = F1\)

  • F12

    • \(N = 5\)

    • \(\sigma = [10, 20, 20, 30, 30]\)

    • \(\lambda = [0.25, 1, 10^{-7}, 10, 10]\)

    • \(bias = [0, 100, 100, 200, 200]+F{12}^*\)

    • \(g_1 = F5\)

    • \(g_2 = F4\)

    • \(g_3 = F1\)

    • \(g_4 = f_{14}\)

    • \(g_5 = f_{11}\)

  • F13

    • \(N = 5\)

    • \(\sigma = [10, 10, 10, 20, 20]\)

    • \(\lambda = [1, 10, 1, 25, 10]\)

    • \(bias = [0, 100, 200, 300, 400]+F{13}^*\)

    • \(g_1 = F8\)

    • \(g_2 = F4\)

    • \(g_3 = F6\)

    • \(g_4 = F5\)

    • \(g_5 = f_{14}\)

  • F14

    • \(N = 7\)

    • \(\sigma = [10, 20, 30, 40, 50, 50, 50]\)

    • \(\lambda = [10, 2.5, 2.5, 10, 10^{-6}, 10^{-6}, 10]\)

    • \(bias = [0, 100, 200, 300, 300, 400, 400]+F{14}^*\)

    • \(g_1 = f_{11}\)

    • \(g_2 = f_{13}\)

    • \(g_3 = F5\)

    • \(g_4 = f_{14}\)

    • \(g_5 = F1\)

    • \(g_6 = F2\)

    • \(g_7 = F4\)

  • F15

    • \(N = 10\)

    • \(\sigma = [10, 10, 20, 20, 30, 30, 40, 40, 50, 50]\)

    • \(\lambda = [0.1, 2.5 \times 10^{-1}, 0.1, 2.5 \times 10^{-2}, 10^{-3}, 0.1, 10^{-5}, 10, 2.5 \times 10^{-2}, 10^{-3}]\)

    • \(bias = [0, 100, 100, 200, 200, 300, 300, 400, 400, 500]+F{15}^*\)

    • \(g_1 = F4\)

    • \(g_2 = f_{6}\)

    • \(g_3 = f_{11}\)

    • \(g_4 = F5\)

    • \(g_5 = f_4\)

    • \(g_6 = f_{12}\)

    • \(g_7 = f_{10}\)

    • \(g_8 = f_{14}\)

    • \(g_9 = f_{13}\)

    • \(g_{10} = F3\)
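
Finally, the weighted composition used by F9–F15 (referenced above) can be sketched as follows. The weights \(\omega _i\) are computed from the \(w_i\) formula given earlier; the case where \({\mathbf{x}}\) coincides exactly with one of the optima \({\mathbf{o}}_i\) (division by zero) is left unhandled in this illustrative version, and all names are assumptions rather than the reference implementation.

```python
import numpy as np

def composition_weights(x, optima, sigmas):
    # w_i = exp(-d_i^2 / (2 D sigma_i^2)) / sqrt(d_i^2), with d_i^2 the squared distance to o_i,
    # normalized so that omega_i = w_i / sum_j w_j.
    x = np.asarray(x, dtype=float)
    D = x.size
    d2 = np.array([np.sum((x - np.asarray(o)) ** 2) for o in optima])
    w = np.exp(-d2 / (2.0 * D * np.asarray(sigmas, dtype=float) ** 2)) / np.sqrt(d2)
    return w / np.sum(w)

def make_composition(basic_funcs, optima, sigmas, lambdas, biases, f_star):
    # FC(x) = sum_i omega_i * (lambda_i * g_i(x) + bias_i) + F*
    def FC(x):
        omega = composition_weights(x, optima, sigmas)
        parts = np.array([lam * g(x) + b for g, lam, b in zip(basic_funcs, lambdas, biases)])
        return float(np.dot(omega, parts)) + f_star
    return FC
```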

About this article

Cite this article

Duarte, G.R., de Castro Lemonge, A.C., da Fonseca, L.G. et al. An Island Model based on Stigmergy to solve optimization problems. Nat Comput 20, 413–441 (2021). https://doi.org/10.1007/s11047-020-09819-x

