
Learning Structure Illuminates Black Boxes – An Introduction to Estimation of Distribution Algorithms

Chapter
Advances in Metaheuristics for Hard Optimization

Part of the book series: Natural Computing Series (NCS)

Abstract

This chapter serves as an introduction to estimation of distribution algorithms (EDAs). EDAs are a new paradigm in evolutionary computation that combines statistical learning with population-based search in order to automatically identify and exploit structural properties of optimization problems. State-of-the-art EDAs consistently outperform classical genetic algorithms on a broad range of hard optimization problems. We review fundamental terms, concepts, and algorithms that facilitate the understanding of EDA research. The focus is on EDAs for combinatorial and continuous non-linear optimization, and the major differences between the two fields are discussed.
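To make the paradigm concrete, below is a minimal, hypothetical sketch of the basic EDA loop (sample from a probabilistic model, select promising solutions, re-estimate the model) on the OneMax toy problem, using independent per-bit marginals in the spirit of UMDA/PBIL. It is an illustration only, not code from the chapter; the function name umda_onemax and all parameter values are assumptions chosen for the example.

import numpy as np

def umda_onemax(n_vars=50, pop_size=100, n_select=50, n_gens=200, seed=0):
    """Minimal EDA loop: sample, select, re-estimate a univariate model."""
    rng = np.random.default_rng(seed)
    p = np.full(n_vars, 0.5)          # start from a uniform model over bit strings
    best = 0
    for _ in range(n_gens):
        # Sample a population of binary strings from the current model.
        pop = (rng.random((pop_size, n_vars)) < p).astype(int)
        fitness = pop.sum(axis=1)     # OneMax: fitness = number of ones
        best = max(best, int(fitness.max()))
        # Truncation selection: keep the n_select best individuals.
        selected = pop[np.argsort(fitness)[-n_select:]]
        # Estimate the distribution of the selected set: per-bit marginal frequencies.
        p = selected.mean(axis=0)
        # Keep probabilities away from 0 and 1 so the model retains some diversity.
        p = np.clip(p, 0.02, 0.98)
        if best == n_vars:
            break
    return p, best

if __name__ == "__main__":
    model, best = umda_onemax()
    print("best OneMax fitness found:", best)

Structure-learning EDAs go one step further: they replace the independent per-bit probabilities estimated here with a learned multivariate model of the selected population (for example a dependency tree or Bayesian network), which is what allows them to capture and exploit problem structure.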

Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Grahl, J., Minner, S., Bosman, P. (2007). Learning Structure Illuminates Black Boxes – An Introduction to Estimation of Distribution Algorithms. In: Siarry, P., Michalewicz, Z. (eds) Advances in Metaheuristics for Hard Optimization. Natural Computing Series. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72960-0_18

  • DOI: https://doi.org/10.1007/978-3-540-72960-0_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72959-4

  • Online ISBN: 978-3-540-72960-0
