Abstract
Constraint satisfaction problems are an important research topic because of their many applications in diverse areas of study. The most common way to solve these problems involves heuristics that guide the search toward promising regions of the search space. In this article, we present a novel way to combine the strengths of distinct heuristics to produce solution methods that perform better than the individual heuristics over a wider range of instances. The proposed methodology produces neural networks that represent hyper-heuristics for variable ordering in constraint satisfaction problems. These neural networks are generated and trained by a genetic algorithm that evolves the topology of the networks and some of their learning parameters. The results obtained suggest that the produced neural networks are a feasible alternative for encoding hyper-heuristics that control the use of different heuristics in such a way that the cost of the search is minimized.
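The abstract only outlines the approach, so the following Python sketch illustrates the general idea of a neuro-evolutionary hyper-heuristic under stated assumptions: a small neural network maps features of the current problem state to one of several variable-ordering heuristics, while a genetic algorithm evolves the hidden-layer size and learning rate used to build the network. The heuristic names, feature definitions, and genetic-algorithm settings below are illustrative assumptions, not the configuration reported in the article, and weight training is omitted.

# Illustrative sketch only (an assumed design, not the authors' implementation):
# a hyper-heuristic selects a variable-ordering heuristic from problem-state
# features via a small neural network whose hidden-layer size and learning rate
# are evolved by a genetic algorithm. Weight training (e.g., backpropagation)
# is omitted for brevity.
import math
import random

HEURISTICS = ["min-domain", "max-degree"]  # assumed low-level heuristics


def state_features(domains, degrees):
    """Assumed features: mean normalized domain size and mean normalized degree."""
    n = len(domains)
    max_dom = max(len(d) for d in domains) or 1
    max_deg = max(degrees) or 1
    return [sum(len(d) for d in domains) / (n * max_dom),
            sum(degrees) / (n * max_deg)]


class HyperHeuristicNet:
    """One-hidden-layer network; topology and learning rate come from a chromosome."""

    def __init__(self, chromosome):
        self.hidden = max(1, int(chromosome["hidden"]))
        self.learning_rate = chromosome["lr"]  # would drive weight training
        self.w1 = [[random.uniform(-1, 1) for _ in range(2)]
                   for _ in range(self.hidden)]
        self.w2 = [[random.uniform(-1, 1) for _ in range(self.hidden)]
                   for _ in range(len(HEURISTICS))]

    def choose(self, x):
        """Map a feature vector to the heuristic to apply at the current node."""
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        scores = [sum(w * hi for w, hi in zip(row, h)) for row in self.w2]
        return HEURISTICS[scores.index(max(scores))]


def evolve(fitness, generations=20, pop_size=10):
    """Tiny GA over (hidden units, learning rate); fitness is search cost (lower is better)."""
    population = [{"hidden": random.randint(1, 8), "lr": random.uniform(0.01, 0.5)}
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: fitness(HyperHeuristicNet(c)))  # evaluate on training CSPs
        parents = population[: pop_size // 2]
        children = [{"hidden": max(1, random.choice(parents)["hidden"]
                                   + random.choice([-1, 0, 1])),               # crossover + mutation
                     "lr": random.choice(parents)["lr"] * random.uniform(0.9, 1.1)}
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    population.sort(key=lambda c: fitness(HyperHeuristicNet(c)))
    return HyperHeuristicNet(population[0])  # network built from the best chromosome found

In use, a fitness function would run a backtracking search over a set of training instances, calling choose(state_features(domains, degrees)) before each variable selection and returning the accumulated search cost (e.g., the number of consistency checks); the network returned by evolve would then be applied to unseen instances.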




Acknowledgments
This research was supported in part by ITESM Strategic Project PRY075, the ITESM Research Group with Strategic Focus in Intelligent Systems, and CONACyT Basic Science Projects under Grants 99695 and 241461.
Ethics declarations
Conflict of interest
José Carlos Ortiz-Bayliss, Hugo Terashima-Marín, and Santiago Enrique Conant-Pablos declare that they have no conflict of interest.
Informed Consent
All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008 (5). Additional informed consent was obtained from all patients for whom identifying information is included in this article.
Human and Animal Rights
This article does not contain any studies with human or animal subjects performed by any of the authors.
About this article
Cite this article
Ortiz-Bayliss, J.C., Terashima-Marín, H. & Conant-Pablos, S.E. A Neuro-evolutionary Hyper-heuristic Approach for Constraint Satisfaction Problems. Cogn Comput 8, 429–441 (2016). https://doi.org/10.1007/s12559-015-9368-2