
Towards Effective Deep Learning for Constraint Satisfaction Problems

  • Conference paper
  • In: Principles and Practice of Constraint Programming (CP 2018)

Abstract

Many attempts have been made to apply machine learning techniques to constraint satisfaction problems (CSPs). However, none of them have made use of the recent advances in deep learning. In this paper, we apply deep learning to predict the satisfiabilities of CSPs. To the best of our knowledge, this is the first effective application of deep learning to CSPs that yields >99.99% prediction accuracy on random Boolean binary CSPs whose constraint tightnesses or constraint densities do not determine their satisfiabilities. We use a deep convolutional neural network on a matrix representation of CSPs. Since it is NP-hard to solve CSPs, labeled data required for training are in general costly to produce and are thus scarce. We address this issue using the asymptotic behavior of generalized Model A, a new random CSP generation model, along with domain adaptation and data augmentation techniques for CSPs. We demonstrate the effectiveness of our deep learning techniques using experiments on random Boolean binary CSPs. While these CSPs are known to be in P, we use them for a proof of concept.
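The abstract does not spell out the matrix representation used. As one plausible illustration (an assumption, not the paper's actual encoding), a Boolean binary CSP can be written as a symmetric 0/1 matrix over variable-value pairs, with a 1 marking each forbidden combination of assignments; the function name `csp_matrix` and the constraint format below are hypothetical:

```python
def csp_matrix(n_vars, constraints):
    """Encode a Boolean binary CSP as a (2*n_vars) x (2*n_vars) 0/1
    matrix: row/column index 2*i + a stands for the assignment
    (variable i = value a); an entry of 1 marks a forbidden pair of
    assignments."""
    size = 2 * n_vars
    m = [[0] * size for _ in range(size)]
    for i, j, forbidden in constraints:
        # `forbidden` lists the disallowed value tuples (a, b) for (x_i, x_j)
        for a, b in forbidden:
            m[2 * i + a][2 * j + b] = 1
            m[2 * j + b][2 * i + a] = 1  # keep the matrix symmetric
    return m

# Tiny example: 3 Boolean variables, one constraint x0 != x1
# (value tuples (0, 0) and (1, 1) are disallowed).
constraints = [(0, 1, [(0, 0), (1, 1)])]
mat = csp_matrix(3, constraints)
print(len(mat))            # 6 rows: two per variable
print(sum(map(sum, mat)))  # 4 ones: two forbidden tuples, each stored twice
```

A fixed-size grid of this kind can be fed to a 2D convolutional network in the same way as a single-channel image, which is one natural way to connect CSPs to standard CNN architectures.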

The research at the University of Southern California (USC) was supported by National Science Foundation (NSF) under grant numbers 1724392, 1409987, and 1319966.


Notes

  1. Another related work [9] using a different approach became publicly available only after this paper was accepted; we had no access to it before then. Nevertheless, it demonstrated only low training and test accuracies in its experiments when the number of variables in a CSP is non-trivial (≥5), and we do not consider it effective (yet).

References

  1. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems. Software (2015). https://www.tensorflow.org/

  2. Achlioptas, D., Molloy, M.S.O., Kirousis, L.M., Stamatiou, Y.C., Kranakis, E., Krizanc, D.: Random constraint satisfaction: a more accurate picture. Constraints 6(4), 329–344 (2001). https://doi.org/10.1023/A:1011402324562


  3. Amadini, R., Gabbrielli, M., Mauro, J.: An empirical evaluation of portfolios approaches for solving CSPs. In: The International Conference on Integration of Artificial Intelligence and Operations Research Techniques in Constraint Programming, pp. 316–324 (2013). https://doi.org/10.1007/978-3-642-38171-3_21


  4. Amadini, R., Gabbrielli, M., Mauro, J.: An enhanced features extractor for a portfolio of constraint solvers. In: The Annual ACM Symposium on Applied Computing, pp. 1357–1359 (2014). https://doi.org/10.1145/2554850.2555114

  5. Arbelaez, A., Hamadi, Y., Sebag, M.: Continuous search in constraint programming. In: The IEEE International Conference on Tools with Artificial Intelligence, pp. 53–60 (2010). https://doi.org/10.1109/ICTAI.2010.17

  6. Bourlard, H.A., Morgan, N.: Connectionist Speech Recognition. Springer, New York (1994). https://doi.org/10.1007/978-1-4615-3210-1


  7. Chollet, F., et al.: Keras (2015). https://keras.io

  8. Ciregan, D., Meier, U., Schmidhuber, J.: Multi-column deep neural networks for image classification. In: The IEEE Conference on Computer Vision and Pattern Recognition, pp. 3642–3649 (2012). https://doi.org/10.1109/CVPR.2012.6248110

  9. Galassi, A., Lombardi, M., Mello, P., Milano, M.: Model agnostic solution of CSPs via deep learning: a preliminary study. In: The International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 254–262 (2018). https://doi.org/10.1007/978-3-319-93031-2_18


  10. Gent, I.P., et al.: Learning when to use lazy learning in constraint solving. In: The European Conference on Artificial Intelligence, pp. 873–878 (2010). https://doi.org/10.3233/978-1-60750-606-5-873

  11. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)


  12. Guerri, A., Milano, M.: Learning techniques for automatic algorithm portfolio selection. In: The European Conference on Artificial Intelligence, pp. 475–479 (2004)


  13. Hahnloser, R.H.R., Sarpeshkar, R., Mahowald, M.A., Douglas, R.J., Seung, H.S.: Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405, 947–951 (2000). https://doi.org/10.1038/35016072


  14. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: The IEEE International Conference on Computer Vision, pp. 1026–1034 (2015). https://doi.org/10.1109/ICCV.2015.123

  15. Kadioglu, S., Malitsky, Y., Sellmann, M., Tierney, K.: ISAC - instance-specific algorithm configuration. In: The European Conference on Artificial Intelligence, pp. 751–756 (2010). https://doi.org/10.3233/978-1-60750-606-5-751

  16. Kotthoff, L.: Algorithm selection for combinatorial search problems: a survey. In: Data Mining and Constraint Programming: Foundations of a Cross-Disciplinary Approach, pp. 149–190 (2016). https://doi.org/10.1007/978-3-319-50137-6_7


  17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: The Neural Information Processing Systems Conference, pp. 1097–1105 (2012). https://doi.org/10.1145/3065386


  18. LeCun, Y.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989). https://doi.org/10.1162/neco.1989.1.4.541


  19. Loreggia, A., Malitsky, Y., Samulowitz, H., Saraswat, V.: Deep learning for algorithm portfolios. In: The AAAI Conference on Artificial Intelligence, pp. 1280–1286 (2016)


  20. Niepert, M., Ahmed, M., Kutzkov, K.: Learning convolutional neural networks for graphs. In: The International Conference on Machine Learning, pp. 2014–2023 (2016)


  21. O’Mahony, E., Hebrard, E., Holland, A., Nugent, C., O’Sullivan, B.: Using case-based reasoning in an algorithm portfolio for constraint solving. In: The Irish Conference on Artificial Intelligence and Cognitive Science (2008)


  22. Prud’homme, C., Fages, J.G., Lorca, X.: Choco Documentation. TASC - LS2N CNRS UMR 6241, COSLING S.A.S. (2017). http://www.choco-solver.org

  23. Pulina, L., Tacchella, A.: A multi-engine solver for quantified Boolean formulas. In: The International Conference on Principles and Practice of Constraint Programming, pp. 574–589 (2007). https://doi.org/10.1007/978-3-540-74970-7_41

  24. Sateesh Babu, G., Zhao, P., Li, X.L.: Deep convolutional neural network based regression approach for estimation of remaining useful life. In: The International Conference on Database Systems for Advanced Applications, pp. 214–228 (2016). https://doi.org/10.1007/978-3-319-32025-0_14


  25. Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L., Dill, D.L.: Learning a SAT solver from single-bit supervision. arXiv:1802.03685 [cs.AI] (2018)

  26. Smith, B.M., Dyer, M.E.: Locating the phase transition in binary constraint satisfaction problems. Artif. Intell. 81(1), 155–181 (1996). https://doi.org/10.1016/0004-3702(95)00052-6


  27. Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla: portfolio-based algorithm selection for SAT. J. Artif. Intell. Res. 32, 565–606 (2008). https://doi.org/10.1613/jair.2490



Author information


Corresponding author

Correspondence to Hong Xu.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, H., Koenig, S., Kumar, T.K.S. (2018). Towards Effective Deep Learning for Constraint Satisfaction Problems. In: Hooker, J. (ed.) Principles and Practice of Constraint Programming. CP 2018. Lecture Notes in Computer Science, vol. 11008. Springer, Cham. https://doi.org/10.1007/978-3-319-98334-9_38


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98333-2

  • Online ISBN: 978-3-319-98334-9

  • eBook Packages: Computer Science (R0)
