A Neuro-evolutionary Hyper-heuristic Approach for Constraint Satisfaction Problems

Published in: Cognitive Computation

Abstract

Constraint satisfaction problems are an important research topic because of their many applications across diverse areas of study. The most common way to solve these problems involves heuristics that guide the search toward promising areas of the space. In this article, we present a novel way to combine the strengths of distinct heuristics to produce solution methods that perform better than the individual heuristics on a wider range of instances. The proposed methodology produces neural networks that represent hyper-heuristics for variable ordering in constraint satisfaction problems. These neural networks are generated and trained by a genetic algorithm that evolves the topology of the networks and some of their learning parameters. The results suggest that the produced neural networks are a feasible way to encode hyper-heuristics that control the use of different heuristics so that the cost of the search is minimized.
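The idea described in the abstract — a neural network that, at each step of the search, picks which variable-ordering heuristic to apply — can be illustrated with a minimal sketch. The toy CSP (graph 3-colouring), the two heuristics (minimum remaining values and degree), the feature choices, and the hand-picked network weights below are all illustrative assumptions, not the paper's trained networks; in the article the weights and topology are evolved by the genetic algorithm.

```python
import math

# Toy binary CSP: 3-colouring of a small graph. Variables are nodes,
# domains are colours, and each edge constrains its endpoints to differ.
NEIGHBORS = {
    "A": ("B", "C"),
    "B": ("A", "C", "D"),
    "C": ("A", "B", "D"),
    "D": ("B", "C", "E"),
    "E": ("D",),
}
COLORS = (0, 1, 2)

def min_domain(unassigned, domains):
    """MRV heuristic: pick the variable with the fewest remaining values."""
    return min(unassigned, key=lambda v: len(domains[v]))

def max_degree(unassigned, domains):
    """Degree heuristic: pick the variable constraining most unassigned vars."""
    return max(unassigned, key=lambda v: sum(n in unassigned for n in NEIGHBORS[v]))

# Hypothetical hand-picked weights for a 2-input, 2-hidden-unit network;
# in the paper's approach these would be produced by neuro-evolution.
W_HID = ((1.5, -2.0), (-1.0, 1.0))
B_HID = (0.1, -0.2)
W_OUT = (1.0, -1.0)
B_OUT = 0.0

def choose_heuristic(unassigned, domains):
    """Map simple instance features to one of the two ordering heuristics."""
    avg_dom = sum(len(domains[v]) for v in unassigned) / (len(unassigned) * len(COLORS))
    avg_deg = sum(len(NEIGHBORS[v]) for v in unassigned) / (len(unassigned) * len(NEIGHBORS))
    hidden = [math.tanh(w0 * avg_dom + w1 * avg_deg + b)
              for (w0, w1), b in zip(W_HID, B_HID)]
    out = sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT
    return min_domain if out >= 0 else max_degree

def solve(assignment, domains):
    """Backtracking with forward checking; the network picks the heuristic
    (and hence the next variable) at every node of the search tree."""
    unassigned = [v for v in NEIGHBORS if v not in assignment]
    if not unassigned:
        return dict(assignment)
    var = choose_heuristic(unassigned, domains)(unassigned, domains)
    for colour in domains[var]:
        pruned = {n: [c for c in domains[n] if c != colour]
                  for n in NEIGHBORS[var] if n not in assignment}
        if any(not d for d in pruned.values()):
            continue  # forward checking: a neighbour's domain was wiped out
        assignment[var] = colour
        result = solve(assignment, {**domains, **pruned})
        if result is not None:
            return result
        del assignment[var]
    return None

solution = solve({}, {v: list(COLORS) for v in NEIGHBORS})
```

A run on this instance returns a valid colouring; the point of the sketch is only the control flow: features of the current search state go into the network, and the network's output decides which low-level heuristic orders the variables at that node.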




Acknowledgments

This research was supported in part by ITESM Strategic Project PRY075, the ITESM Research Group with Strategic Focus in Intelligent Systems, and CONACyT Basic Science Projects under Grants 99695 and 241461.

Author information

Corresponding author

Correspondence to José Carlos Ortiz-Bayliss.

Ethics declarations

Conflict of interest

José Carlos Ortiz-Bayliss, Hugo Terashima-Marín, and Santiago Enrique Conant-Pablos declare that they have no conflict of interest.

Informed Consent

All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008 (5). Additional informed consent was obtained from all patients for which identifying information is included in this article.

Human and Animal Rights

This article does not contain any studies with human or animal subjects performed by any of the authors.


About this article


Cite this article

Ortiz-Bayliss, J.C., Terashima-Marín, H. & Conant-Pablos, S.E. A Neuro-evolutionary Hyper-heuristic Approach for Constraint Satisfaction Problems. Cogn Comput 8, 429–441 (2016). https://doi.org/10.1007/s12559-015-9368-2
