DOI: 10.1145/2330163.2330200
Research article

Influence of selection on structure learning in Markov network EDAs: an empirical study

Authors: A. E. I. Brownlee, J. A. W. McCall, and M. Pelikan
Published: 7 July 2012

ABSTRACT

Learning a good model structure is important for the efficient solving of problems by estimation of distribution algorithms. In this paper we present the results of a series of experiments, applying a structure learning algorithm for undirected probabilistic graphical models, based on statistical dependency tests, to three fitness functions under different selection operators, proportions, and pressures. The number of spurious interactions found by the algorithm is measured and reported. Truncation selection and its complement (selecting only low-fitness solutions) prove quite robust, yielding a similar number of spurious dependencies regardless of selection pressure. In contrast, tournament and fitness proportionate selection are strongly affected by the selection proportion and pressure.
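The experimental setup described above can be illustrated with a minimal sketch: select part of a population, run a pairwise statistical dependency test between every two variables, add an edge where dependence is detected, and count edges not implied by the fitness function's true structure as spurious. This is not the authors' implementation; the chi-square test, the onemax fitness function, the population size, and the selection parameters below are all illustrative assumptions.

```python
# Illustrative sketch only: pairwise chi-square structure learning over a
# selected population, with truncation and tournament selection for comparison.
import numpy as np

CHI2_CRIT_DF1 = 3.841  # 5% critical value, 1 degree of freedom (2x2 table)

def onemax(x):
    # Onemax has no interactions between variables, so any learned edge is spurious.
    return x.sum()

def truncation_select(pop, fitness, proportion=0.3):
    """Keep the top fraction of the population by fitness."""
    n_keep = max(2, int(len(pop) * proportion))
    order = np.argsort(fitness)[::-1]
    return pop[order[:n_keep]]

def tournament_select(pop, fitness, size=2, n_out=None):
    """Tournaments of the given size, with replacement."""
    rng = np.random.default_rng(0)
    n_out = n_out or len(pop)
    winners = []
    for _ in range(n_out):
        contenders = rng.integers(0, len(pop), size)
        winners.append(pop[contenders[np.argmax(fitness[contenders])]])
    return np.array(winners)

def chi2_dependent(xi, xj):
    """Pearson chi-square independence test on two binary columns."""
    observed = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            observed[a, b] = np.sum((xi == a) & (xj == b))
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row @ col / observed.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(expected > 0, (observed - expected) ** 2 / expected, 0.0)
    return terms.sum() > CHI2_CRIT_DF1

def learn_structure(selected):
    """Add an edge (i, j) whenever the dependency test fires."""
    n = selected.shape[1]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if chi2_dependent(selected[:, i], selected[:, j])}

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(500, 20))
fit = np.array([onemax(ind) for ind in pop])
for name, sel in [("truncation", truncation_select(pop, fit, 0.3)),
                  ("tournament", tournament_select(pop, fit, size=4))]:
    print(name, "spurious edges:", len(learn_structure(sel)))
```

Because this fitness function has no true interactions, every accepted edge is spurious; varying the selection operator, proportion, and pressure in the sketch gives a rough feel for the effect the paper measures systematically.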


Published in:
GECCO '12: Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation
July 2012, 1396 pages
ISBN: 9781450311779
DOI: 10.1145/2330163
Copyright © 2012 ACM
Publisher: Association for Computing Machinery, New York, NY, United States