
Multi-population Based Univariate Marginal Distribution Algorithm for Dynamic Optimization Problems

Abstract

Many real-world problems are dynamic optimization problems in which the optimal solutions must be continuously tracked over time. In this paper, a multi-population based univariate marginal distribution algorithm (MUMDA) is proposed to solve dynamic optimization problems. The main idea of the algorithm is to construct several probability models by dividing the population into several parts, thereby partitioning the search space into several regions and maintaining diversity. Concretely, MUMDA uses one probability vector to search the promising areas identified previously, and uses the other probability vectors to search for new promising solutions. Moreover, the convergence of the univariate marginal distribution algorithm (UMDA) is proved, which can be used to analyze the validity of the proposed algorithm. Finally, an experimental study comparing MUMDA with several UMDA-based algorithms shows that MUMDA is effective and adapts rapidly to dynamic environments.
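
The paper's exact update equations are not reproduced on this page, but the abstract's description (several univariate probability vectors, one tracking previously found promising regions while the others explore) can be illustrated with a small sketch. The Python code below is a minimal, assumption-laden illustration only: the binary `onemax` objective, truncation selection, the probability bounds, and the periodic reinitialisation of exploration models are illustrative choices, not the authors' exact MUMDA.

```python
import numpy as np

def onemax(x):
    """Toy stand-in for a (possibly dynamic) objective: count of ones."""
    return int(x.sum())

def sample(p, pop_size, rng):
    """Sample a binary population from a univariate probability vector p."""
    return (rng.random((pop_size, p.size)) < p).astype(int)

def update(p, selected, learn_rate=1.0, p_min=0.05, p_max=0.95):
    """Re-estimate the univariate marginals from the selected individuals."""
    freq = selected.mean(axis=0)
    p = (1.0 - learn_rate) * p + learn_rate * freq
    return np.clip(p, p_min, p_max)  # bound probabilities to preserve diversity

def mumda(fitness, n_bits=20, n_models=3, pop_size=30,
          truncation=0.5, generations=50, seed=0):
    """Multi-population UMDA sketch: one exploiting model plus exploring models."""
    rng = np.random.default_rng(seed)
    models = [np.full(n_bits, 0.5) for _ in range(n_models)]
    best_x, best_f = None, -np.inf
    for g in range(generations):
        for i, p in enumerate(models):
            pop = sample(p, pop_size, rng)
            fit = np.array([fitness(x) for x in pop])
            order = np.argsort(fit)[::-1]                     # best first
            elite = pop[order[: int(truncation * pop_size)]]  # truncation selection
            models[i] = update(p, elite)
            if fit[order[0]] > best_f:
                best_f, best_x = fit[order[0]], pop[order[0]].copy()
        # Diversity maintenance (illustrative): every 10 generations,
        # reinitialise one exploration model so it searches new regions
        # rather than converging to the area already tracked by model 0.
        if n_models > 1 and g % 10 == 9:
            j = 1 + rng.integers(n_models - 1)
            models[j] = np.full(n_bits, 0.5)
    return best_x, best_f

if __name__ == "__main__":
    x, f = mumda(onemax)
    print("best fitness:", f)  # expected to reach or approach n_bits
```

In a dynamic setting, the same loop would be rerun whenever the objective changes; the exploration models give the algorithm a chance to relocate the optimum without restarting from scratch.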

Author information

Corresponding author

Correspondence to Yan Wu.

About this article

Cite this article

Wu, Y., Wang, Y. & Liu, X. Multi-population Based Univariate Marginal Distribution Algorithm for Dynamic Optimization Problems. J Intell Robot Syst 59, 127–144 (2010). https://doi.org/10.1007/s10846-009-9392-0
