
A multiple local search algorithm for continuous dynamic optimization

Journal of Heuristics

Abstract

Many real-world optimization problems are dynamic (time-dependent) and require an algorithm that can continuously track a changing optimum over time. In this paper, we propose a new algorithm for continuous dynamic optimization. It is based on several coordinated local searches and on the archiving of the optima found by these searches; the archive is exploited when the environment changes. The performance of the algorithm is analyzed on the Moving Peaks Benchmark and the Generalized Dynamic Benchmark Generator, and then compared with that of competing dynamic optimization algorithms from the literature. The results show the efficiency of the proposed algorithm.
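
To illustrate the idea, the following sketch shows one way a set of coordinated local searches with an archive of found optima could be organized. The change-detection test, restart policy, and all names are illustrative assumptions, not the paper's exact MLSDO procedure (maximization is assumed):

```python
import numpy as np

def multiple_local_search(f, lower, upper, n_agents=5, archive_cap=30,
                          budget=10_000, step=0.1, patience=20):
    """Illustrative skeleton: several coordinated hill-climbers plus an
    archive of the optima they find, reused when the environment changes.
    All names and policies here are assumptions, not the exact MLSDO."""
    rng = np.random.default_rng(0)
    dim = len(lower)
    agents = [rng.uniform(lower, upper) for _ in range(n_agents)]
    stalls = [0] * n_agents
    archive = []                            # local optima found so far
    probe, probe_val = agents[0].copy(), f(agents[0])

    for _ in range(budget):
        if f(probe) != probe_val:           # re-evaluated fitness differs:
            # the environment changed; restart from the archived optima,
            # which are likely close to the displaced peaks
            for k, x in enumerate(archive[-n_agents:]):
                agents[k] = x.copy()
            probe_val = f(probe)
        for k in range(n_agents):
            cand = np.clip(agents[k] + rng.normal(0.0, step, dim),
                           lower, upper)
            if f(cand) > f(agents[k]):      # maximization: keep improvements
                agents[k], stalls[k] = cand, 0
            else:
                stalls[k] += 1
            if stalls[k] > patience:        # stagnation: archive the optimum
                archive = (archive + [agents[k].copy()])[-archive_cap:]
                agents[k] = rng.uniform(lower, upper)  # explore elsewhere
                stalls[k] = 0
    return max(archive + agents, key=f)
```

The archive pays off on benchmarks such as MPB, where peaks move only moderately between changes, so the archived optima provide good restart points.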




Notes

  1. The strategy of DE consists in generating a new position for an individual from the differences between other, randomly selected individuals (see the sketch below).
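
A minimal sketch of this mutation scheme, assuming the classical DE/rand/1 operator (the function name and the scale factor F are illustrative; the paper does not commit to a specific DE variant here):

```python
import numpy as np

def de_rand_1_mutation(population, i, F=0.5):
    """Classical DE/rand/1 mutation (illustrative; not the paper's code).

    A new position for individual i is built by adding the scaled
    difference between two randomly selected individuals to a third one.
    """
    n = len(population)
    # choose three distinct indices, all different from i
    r1, r2, r3 = np.random.choice(
        [k for k in range(n) if k != i], size=3, replace=False
    )
    return population[r1] + F * (population[r2] - population[r3])
```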

Abbreviations

\(\alpha \) :

Number of evaluations that make up a time span in the benchmarks

\(A_{i}\) :

Archive of the last \(n_{m}\) initial positions of agents that are created or relocated, in MLSDO

\(A_{m}\) :

Archive of the local optima found by the agents, in MLSDO

\(\Delta _{i}\) :

Size of the interval that defines the search space on the \(i{\mathrm{th}}\) axis in the “non-normalized” basis

\(\delta _{ph}\) :

Parameter of MLSDO that sets the highest value of the precision parameter in the stagnation criterion of the agents' local searches

\(\delta _{pl}\) :

Parameter of MLSDO that sets the lowest value of the precision parameter in the stagnation criterion of the agents' local searches

\(d\) :

Dimension of the search space

\(\mathbf{D}\) :

Direction vector of the preceding displacement of an agent, in MLSDO

\(\mathbf{D}^{\prime }\) :

Direction vector of the current displacement of an agent, in MLSDO

\(\mathbf{D}_{n}\) :

Direction vector of the displacement of an agent at the \(n{\mathrm{th}}\) step of its local search, in MLSDO

\(\mathbf{e}_{i}\) :

\(i{\mathrm{th}}\) unit vector of the “non-normalized” basis of the search space

\(f(\mathbf{x})\) :

The objective function of a static optimization problem

\(f(\mathbf{x}, t)\) :

The objective function of a dynamic optimization problem

\(f^{*}(t)\) :

Value of the best solution found at the \(t{\mathrm{th}}\) evaluation since the last change in the objective function

\(f_{i}(t)\) :

Value of the best solution found at the \(t{\mathrm{th}}\) evaluation of the \(i{\mathrm{th}}\) run of GDBG

\(f^{*}_{i}(t)\) :

Value of the global optimum at the \(t{\mathrm{th}}\) evaluation of the \(i{\mathrm{th}}\) run of GDBG

\(f^{*}_{j}\) :

Value of the global optimum for the \(j{\mathrm{th}}\) time span in the benchmarks

\(f^{*}_{ji}\) :

Value of the best solution found at the \(i{\mathrm{th}}\) evaluation of the \(j{\mathrm{th}}\) time span of MPB

\(\mathrm{F}_{k}\) :

\(k{\mathrm{th}}\) problem of GDBG

\(\bar{f}^{*}_{x}\) :

Average relative error of the best fitness found at the \(x{\mathrm{th}}\) evaluation of a time span of MPB

\(fitness\) :

Function that returns the value of the objective function of a given solution, in MLSDO

\(g_{k}(\mathbf{x}, t)\) :

The \(k{\mathrm{th}}\) inequality constraint of a dynamic optimization problem

\(h_{j}(\mathbf{x}, t)\) :

The \(j{\mathrm{th}}\) equality constraint of a dynamic optimization problem

\(isNotUpToDate\) :

Flag that indicates whether a change in the objective function has occurred since the detection of a given stored optimum

\(m\) :

Number of optima currently stored in the archive \(A_{m}\)

\(mark_{max}\) :

Maximal mark that can be obtained on the considered test case of GDBG

\(mark_{pct}\) :

Mark obtained on the considered test case of GDBG

\(max\) :

Function that returns the maximum value among several given values

\(min\) :

Function that returns the minimum value among several given values

\(N\) :

Number of agents currently existing during the execution of MLSDO

\(n_{a}\) :

Parameter of MLSDO that defines the maximum number of “exploring” agents

\(n_{c}\) :

Parameter of MLSDO that defines the maximum number of “tracking” agents created after the detection of a change

\(N_{c}\) :

Number of changes in the benchmarks

\(N_{e}(j)\) :

Number of evaluations performed during the \(j{\mathrm{th}}\) time span of MPB

\(n_{m}\) :

Capacity of the archives \(A_{i}\) and \(A_{m}\)

\(\mathbf{O}_\mathbf{c}\) :

A newly found optimum

\(oe\) :

Offline error used in MPB (see the formula below)
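
Written out with the symbols defined in this list, the offline error takes the standard MPB form (quoted here for convenience; this is the usual definition and is assumed to match the paper's usage):

\[
oe = \frac{1}{N_{c}} \sum_{j=1}^{N_{c}} \left( \frac{1}{N_{e}(j)} \sum_{i=1}^{N_{e}(j)} \left( f^{*}_{j} - f^{*}_{ji} \right) \right)
\]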

\(op\) :

Overall performance used in GDBG

\(R\) :

Step size of an agent of MLSDO

\(r_{e}\) :

Parameter of MLSDO that defines the exclusion radius of the agents, and the initial step size of “exploring” agents

\(r_{i}(t)\) :

Relative error of the best fitness found at the \(t{\mathrm{th}}\) evaluation of the \(i{\mathrm{th}}\) run of GDBG

\(r_{l}\) :

Parameter of MLSDO that defines the initial step size of “tracking” agents

\(r_{new}\) :

Initial step size of an agent, in MLSDO (can be equal to either \(r_{e}\) or \(r_{l}\))

\(round\) :

Function that rounds a given number to the nearest integer

\(s\) :

Change severity used in MPB

\(\mathbf{S}_\mathbf{c}\) :

Current solution of an agent, in MLSDO

\(\mathbf{S^{\prime }}_\mathbf{c}\) :

Best candidate solution of an agent at the current step of its local search, in MLSDO

\(\mathbf{S}_\mathbf{prev}\) :

A candidate solution generated together with \(\mathbf{S}_\mathbf{next}\) in the local search of an agent, in MLSDO

\(\mathbf{S}_\mathbf{new}\) :

The initial solution of the local search of an agent, in MLSDO

\(\mathbf{S}_\mathbf{next}\) :

A candidate solution generated together with \(\mathbf{S}_\mathbf{prev}\) in the local search of an agent, in MLSDO

\(\mathbf{S}_\mathbf{w}\) :

Worst candidate solution of an agent at the current step of its local search, in MLSDO

\(t\) :

Number of evaluations performed since the beginning of the execution of the tested algorithm

\(\mathrm{T}_{k}\) :

\(k{\mathrm{th}}\) change scenario of GDBG

\(u\) :

Number of equality constraints

\(U\) :

Value of the cumulative dot product used to adapt the step size of an agent, in MLSDO

\(\mathbf{u}_{i}\) :

\(i{\mathrm{th}}\) vector of the “normalized” basis of the search space

\(U_{n}\) :

Value of the cumulative dot product of an agent at the \(n{\mathrm{th}}\) step of its local search, in MLSDO
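
To make the role of this cumulative dot product concrete, here is a minimal, hypothetical sketch of dot-product-based step-size adaptation: successive displacement directions that agree (positive dot products) push \(U\) up and suggest enlarging the step size \(R\), while oscillating directions push \(U\) down and suggest shrinking it. The thresholds and scaling factors below are illustrative assumptions, not the exact MLSDO update rule:

```python
import numpy as np

def adapt_step_size(R, U, D_prev, D_curr, grow=2.0, shrink=0.5):
    """Illustrative step-size adaptation (assumed scheme, not MLSDO's
    exact rule).

    U accumulates the dot products of successive unit displacement
    directions: a clearly positive U means the agent keeps moving the
    same way (enlarge R), a clearly negative U means it oscillates
    around an optimum (reduce R).
    """
    U += np.dot(D_prev, D_curr)
    if U >= 1.0:        # sustained progress in one direction
        R, U = R * grow, 0.0
    elif U <= -1.0:     # oscillation: the agent overshoots the optimum
        R, U = R * shrink, 0.0
    return R, U
```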

\(v\) :

Number of inequality constraints

\(\mathbf{x}\) :

A solution in the search space of an optimization problem

\(x_{i}\) :

\(i{\mathrm{th}}\) coordinate of the solution vector \(\mathbf{x}\)

References

  • Bird, S., Li, X.: Using regression to improve local convergence. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 592–599. IEEE, Singapore (2007)

  • Blackwell, T., Branke, J.: Multi-swarm optimization in dynamic environments. Lecture Notes in Computer Science, vol. 3005, pp. 489–500 (2004)

  • Blackwell, T., Branke, J.: Multi-swarms, exclusion and anti-convergence in dynamic environments. IEEE Trans. Evol. Comput. 10(4), 459–472 (2006)

  • Branke, J., Kaußler, T., Schmidt, C., Schmeck, H.: A multi-population approach to dynamic optimization problems. In: Proceedings of Adaptive Computing in Design and Manufacturing, pp. 299–308. Springer, Berlin (2000)

  • Branke, J.: Memory enhanced evolutionary algorithms for changing optimization problems. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1875–1882. IEEE, Washington (1999a)

  • Branke, J.: The Moving Peaks Benchmark website (1999b). http://people.aifb.kit.edu/jbr/MovPeaks

  • Brest, J., Zamuda, A., Boskovic, B., Maucec, M.S., Zumer, V.: Dynamic optimization using self-adaptive differential evolution. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 415–422. IEEE, Trondheim (2009)

  • de França, F.O., Zuben, F.J.V.: A dynamic artificial immune algorithm applied to challenging benchmarking problems. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 423–430. IEEE, Trondheim (2009)

  • Dréo, J., Siarry, P.: An ant colony algorithm aimed at dynamic continuous optimization. Appl. Math. Comput. 181(1), 457–467 (2006)

  • Du, W., Li, B.: Multi-strategy ensemble particle swarm optimization for dynamic optimization. Inform. Sci. 178(15), 3096–3109 (2008)

  • Gardeux, V., Chelouah, R., Siarry, P., Glover, F.: Unidimensional search for solving continuous high-dimensional optimization problems. In: Proceedings of the IEEE International Conference on Intelligent Systems Design and Applications, pp. 1096–1101. IEEE, Pisa (2009)

  • Gonzalez, J.R., Masegosa, A.D., Garcia, I.J.: A cooperative strategy for solving dynamic optimization problems. Memet. Comput. 3(1), 3–14 (2010)

  • Jin, Y., Branke, J.: Evolutionary optimization in uncertain environments–a survey. IEEE Trans. Evol. Comput. 9(3), 303–317 (2005)

  • Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks IV, pp. 1942–1948. IEEE, Perth (1995)

  • Korosec, P., Silc, J.: The differential ant-stigmergy algorithm applied to dynamic optimization problems. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 407–414. IEEE, Trondheim (2009)

  • Lepagnot, J., Nakib, A., Oulhadj, H., Siarry, P.: Performance analysis of MADO dynamic optimization algorithm. In: Proceedings of the IEEE International Conference on Intelligent Systems Design and Applications, pp. 37–42. IEEE, Pisa (2009)

  • Lepagnot, J., Nakib, A., Oulhadj, H., Siarry, P.: A new multiagent algorithm for dynamic continuous optimization. Int. J. Appl. Metaheur. Comput. 1(1), 16–38 (2010)

  • Li, X., Branke, J., Blackwell, T.: Particle swarm with speciation and adaptation in a dynamic environment. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 51–58. ACM, Seattle (2006)

  • Li, C., Yang, S.: A clustering particle swarm optimizer for dynamic optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 439–446. IEEE, Trondheim (2009)

  • Li, C., Yang, S.: A generalized approach to construct benchmark problems for dynamic optimization. In: Proceedings of the 7th International Conference on Simulated Evolution and Learning, pp. 391–400. Springer, Melbourne (2008)

  • Li, X.: Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 105–116. Springer, Seattle (2004)

  • Li, C., Yang, S., Nguyen, T.T., Yu, E.L., Yao, X., Jin, Y., Beyer, H.G., Suganthan, P.N.: Benchmark generator for CEC 2009 competition on dynamic optimization. Technical report, University of Leicester, University of Birmingham, Nanyang Technological University (2008)

  • Liu, L., Yang, S., Wang, D.: Particle swarm optimization with composite particles in dynamic environments. IEEE Trans. Syst. Man Cybern. B 40(6), 1634–1648 (2010)

  • Lung, R.I., Dumitrescu, D.: Collaborative evolutionary swarm optimization with a Gauss chaotic sequence generator. Innov. Hybrid Intell. Syst. 44, 207–214 (2007)

  • Lung, R.I., Dumitrescu, D.: ESCA: a new evolutionary-swarm cooperative algorithm. Stud. Comput. Intell. 129, 105–114 (2008)

  • Mendes, R., Mohais, A.: DynDE: a differential evolution for dynamic optimization problems. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 2808–2815. IEEE, Edinburgh (2005)

  • Moser, I., Hendtlass, T.: A simple and efficient multi-component algorithm for solving dynamic function optimisation problems. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 252–259. IEEE, Singapore (2007)

  • Moser, I., Chiong, R.: Dynamic function optimisation with hybridised extremal dynamics. Memet. Comput. 2(2), 137–148 (2010)

  • Novoa, P., Pelta, D.A., Cruz, C., del Amo, I.G.: Controlling particle trajectories in a multi-swarm approach for dynamic optimization problems. In: Proceedings of the International Work-Conference on the Interplay between Natural and Artificial Computation, pp. 285–294. Springer, Santiago de Compostela (2009)

  • Parrott, D., Li, X.: A particle swarm model for tracking multiple peaks in a dynamic environment using speciation. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 98–103. IEEE, San Diego (2004)

  • Parrott, D., Li, X.: Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Trans. Evol. Comput. 10(4), 440–458 (2006)

  • Pelta, D., Cruz, C., Gonzalez, J.R.: A study on diversity and cooperation in a multiagent strategy for dynamic optimization problems. Int. J. Intell. Syst. 24(7), 844–861 (2009a)

  • Pelta, D., Cruz, C., Verdegay, J.L.: Simple control rules in a cooperative system for dynamic optimisation problems. Int. J. Gen. Syst. 38(7), 701–717 (2009b)

  • Tfaili, W., Siarry, P.: A new charged ant colony algorithm for continuous dynamic optimization. Appl. Math. Comput. 197(2), 604–613 (2008)

  • Wang, H., Wang, D., Yang, S.: A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems. Soft Comput. 13(8–9), 763–780 (2009)

  • Yang, S., Yao, X.: Population-based incremental learning with associative memory for dynamic environments. IEEE Trans. Evol. Comput. 12(5), 542–562 (2008)

  • Yang, S., Li, C.: A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments. IEEE Trans. Evol. Comput. 14(6), 959–974 (2010)

  • Yu, E.L., Suganthan, P.N.: Evolutionary programming with ensemble of external memories for dynamic optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 431–438. IEEE, Trondheim (2009)

  • Zeng, S., Shi, H., Kang, L., Ding, L.: Orthogonal dynamic hill climbing algorithm: ODHC. In: Yang, S., Ong, Y.S., Jin, Y. (eds.) Evolutionary Computation in Dynamic and Uncertain Environments, pp. 79–104. Springer, New York (2007)


Author information

Corresponding author

Correspondence to Patrick Siarry.


About this article

Cite this article

Lepagnot, J., Nakib, A., Oulhadj, H. et al. A multiple local search algorithm for continuous dynamic optimization. J Heuristics 19, 35–76 (2013). https://doi.org/10.1007/s10732-013-9215-0

