lazyCoP: Lazy Paramodulation Meets Neurally Guided Search

Conference paper

Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12842)

Abstract

State-of-the-art automated theorem provers explore large search spaces with carefully-engineered routines, but most do not learn from past experience as human mathematicians can. Unfortunately, machine-learned heuristics for theorem proving are typically either fast or accurate, not both. Therefore, systems must make a tradeoff between the quality of heuristic guidance and the reduction in inference rate required to use it. We present a system (lazyCoP) based on lazy paramodulation that is completely insulated from heuristic overhead, allowing the use of even deep neural networks with no measurable reduction in inference rate. Given 10 s to find proofs in a corpus of mathematics, the system improves from 64% to 70% when trained on its own proofs.
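
The key engineering point in the abstract is that heuristic evaluation is decoupled from the inference loop, so even an expensive neural network cannot reduce the inference rate. Below is a minimal Rust sketch of that general decoupling pattern; the `State` struct, the toy `score` function, and the artificial delay are illustrative assumptions, and standard channels stand in for whatever machinery the actual prover uses. It shows the idea of asynchronous evaluation, not lazyCoP's implementation.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical search state: an identifier plus a feature vector.
struct State {
    id: usize,
    features: Vec<f32>,
}

// Stand-in for a learned heuristic; a real system would run a network here.
fn score(features: &[f32]) -> f32 {
    features.iter().sum::<f32>() / features.len().max(1) as f32
}

fn main() {
    let (to_eval, eval_rx) = mpsc::channel::<State>();
    let (to_search, score_rx) = mpsc::channel::<(usize, f32)>();

    // Evaluator thread: scores states whenever they arrive and sends the
    // results back; it may lag behind the prover without ever blocking it.
    let evaluator = thread::spawn(move || {
        for state in eval_rx {
            thread::sleep(Duration::from_millis(5)); // simulate slow evaluation
            let _ = to_search.send((state.id, score(&state.features)));
        }
    });

    // "Prover" loop: expands states at full speed, submits them for scoring,
    // and opportunistically consumes whatever guidance has already arrived.
    for id in 0..100 {
        let state = State { id, features: vec![id as f32, 1.0, 2.0] };
        to_eval.send(state).expect("evaluator alive");
        while let Ok((scored_id, value)) = score_rx.try_recv() {
            println!("guidance: state {scored_id} scored {value:.2}");
        }
    }

    drop(to_eval); // closing the channel lets the evaluator thread finish
    evaluator.join().expect("evaluator thread");
    while let Ok((scored_id, value)) = score_rx.recv() {
        println!("guidance: state {scored_id} scored {value:.2}");
    }
}
```

The prover thread never blocks on the evaluator: it submits states and consumes whatever scores have already arrived, so when evaluation lags, guidance quality degrades gracefully instead of throttling inference.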

Notes

  1. Achieved by only making inferences used in the eventual proof.

  2. https://github.com/MichaelRawson/lazycop.

  3. Usually this means that when adding a clause, there must be a literal with opposite sign that unifies with a leaf literal. Lazy paramodulation extends this notion to equality reasoning (see the sketch after these notes).

  4. It is not known whether lazyCoP’s calculus with refinements is complete. For instance, and to the best of our knowledge, Paskevich [27] leaves the compatibility of lazy paramodulation with the regularity condition an open question.

  5. No value function is employed: it is unclear how to adapt this to asynchronous evaluation, or how useful it would be in an asynchronous context.

  6. That is, states with more than one possible action.

  7. Intel® Core i7-6700 CPU @ 3.40 GHz, NVIDIA® GeForce® GT 730.
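
To make the connection condition in note 3 concrete, the sketch below (our own simplified term and literal representation, not lazyCoP's data structures) checks whether a candidate literal may extend a branch: it must have the opposite sign to the leaf literal, and the two atoms must unify.

```rust
use std::collections::HashMap;

// Simplified first-order terms: variables and applications f(t1, ..., tn);
// constants are 0-ary applications.
#[derive(Clone, Debug)]
enum Term {
    Var(String),
    App(String, Vec<Term>),
}

// A literal is a signed atom.
#[derive(Clone, Debug)]
struct Literal {
    positive: bool,
    atom: Term,
}

// Follow variable bindings to a representative term.
fn walk(t: &Term, subst: &HashMap<String, Term>) -> Term {
    match t {
        Term::Var(v) => match subst.get(v) {
            Some(bound) => walk(bound, subst),
            None => t.clone(),
        },
        _ => t.clone(),
    }
}

// Syntactic unification (no occurs check, to keep the sketch minimal).
fn unify(a: &Term, b: &Term, subst: &mut HashMap<String, Term>) -> bool {
    let (a, b) = (walk(a, subst), walk(b, subst));
    match (a, b) {
        (Term::Var(v), t) | (t, Term::Var(v)) => {
            // Skip useless self-bindings x -> x.
            if !matches!(&t, Term::Var(u) if *u == v) {
                subst.insert(v, t);
            }
            true
        }
        (Term::App(f, fs), Term::App(g, gs)) => {
            f == g
                && fs.len() == gs.len()
                && fs.iter().zip(gs.iter()).all(|(x, y)| unify(x, y, subst))
        }
    }
}

// The connection check: opposite signs and unifiable atoms.
fn connects(leaf: &Literal, candidate: &Literal) -> bool {
    leaf.positive != candidate.positive
        && unify(&leaf.atom, &candidate.atom, &mut HashMap::new())
}

fn main() {
    // Leaf literal P(x) against candidate literal ~P(f(a)).
    let leaf = Literal {
        positive: true,
        atom: Term::App("P".into(), vec![Term::Var("x".into())]),
    };
    let candidate = Literal {
        positive: false,
        atom: Term::App(
            "P".into(),
            vec![Term::App("f".into(), vec![Term::App("a".into(), vec![])])],
        ),
    };
    println!("connected: {}", connects(&leaf, &candidate));
}
```

Running it prints `connected: true`, since ~P(f(a)) is complementary to the leaf P(x) under the substitution x ↦ f(a).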

References

  1. Bansal, K., Loos, S., Rabe, M., Szegedy, C., Wilcox, S.: HOList: an environment for machine learning of higher order logic theorem proving. In: International Conference on Machine Learning, pp. 454–463 (2019)

  2. Bayerl, S., Letz, R.: SETHEO: a sequential theorem prover for first-order logic. In: Esprit '87: Achievements and Impacts, part 1, pp. 721–735 (1987)

  3. Böhme, S., Nipkow, T.: Sledgehammer: judgement day. In: Giesl, J., Hähnle, R. (eds.) IJCAR 2010. LNCS (LNAI), vol. 6173, pp. 107–121. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14203-1_9

  4. Chvalovský, K., Jakubův, J., Suda, M., Urban, J.: ENIGMA-NG: efficient neural and gradient-boosted inference guidance for E. In: Fontaine, P. (ed.) CADE 2019. LNCS (LNAI), vol. 11716, pp. 197–215. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29436-6_12

  5. Färber, M., Brown, C.: Internal guidance for satallax. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 349–361. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40229-1_24

  6. Färber, M., Kaliszyk, C., Urban, J.: Monte-Carlo connection prover. In: Second Conference on Artificial Intelligence and Theorem Proving (2017)

  7. Gauthier, T., Kaliszyk, C., Urban, J., Kumar, R., Norrish, M.: TacticToe: learning to prove with tactics. J. Autom. Reason. 65(2), 257–286 (2021)

  8. Gleiss, B., Suda, M.: Layered clause selection for theory reasoning. In: Peltier, N., Sofronie-Stokkermans, V. (eds.) IJCAR 2020. LNCS (LNAI), vol. 12166, pp. 402–409. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51074-9_23

  9. Goertzel, Z.A.: Make E smart again (short paper). In: Peltier, N., Sofronie-Stokkermans, V. (eds.) IJCAR 2020. LNCS (LNAI), vol. 12167, pp. 408–415. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51054-1_26

  10. Grabowski, A., Kornilowicz, A., Naumowicz, A.: Mizar in a nutshell. J. Formalized Reason. 3(2), 153–245 (2010)

  11. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning, pp. 448–456 (2015)

  12. Irving, G., Szegedy, C., Alemi, A.A., Eén, N., Chollet, F., Urban, J.: DeepMath – deep sequence models for premise selection. In: Advances in Neural Information Processing Systems, pp. 2235–2243 (2016)

  13. Kaliszyk, C., Urban, J.: FEMaLeCoP: fairly efficient machine learning connection prover. In: Davis, M., Fehnker, A., McIver, A., Voronkov, A. (eds.) LPAR 2015. LNCS, vol. 9450, pp. 88–96. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-48899-7_7

  14. Kaliszyk, C., Urban, J., Michalewski, H., Olšák, M.: Reinforcement learning of theorem proving. In: Advances in Neural Information Processing Systems, pp. 8822–8833 (2018)

  15. Kovács, L., Voronkov, A.: First-order theorem proving and Vampire. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 1–35. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_1

  16. Lederman, G., Rabe, M., Seshia, S., Lee, E.A.: Learning heuristics for quantified boolean formulas through reinforcement learning. In: International Conference on Learning Representations (2020). https://openreview.net/forum?id=BJluxREKDB

  17. Letz, R., Mayr, K., Goller, C.: Controlled integration of the cut rule into connection tableau calculi. J. Autom. Reason. 13(3), 297–337 (1994)

  18. Letz, R., Stenz, G.: Model elimination and connection tableau procedures. In: Handbook of Automated Reasoning, vol. 2. MIT Press (2001)

  19. Loos, S., Irving, G., Szegedy, C., Kaliszyk, C.: Deep network guided proof search. In: LPAR-21. 21st International Conference on Logic for Programming, Artificial Intelligence and Reasoning, pp. 85–105 (2017)

  20. Loshchilov, I., Hutter, F.: SGDR: stochastic gradient descent with warm restarts. In: 5th International Conference on Learning Representations (2017)

  21. McCune, W., Wos, L.: Otter – the CADE-13 competition incarnations. J. Autom. Reason. 18(2), 211–220 (1997)

  22. Nieuwenhuis, R., Rubio, A.: Paramodulation-based theorem proving. In: Handbook of Automated Reasoning, vol. 1. MIT Press (2001)

  23. Nickolls, J., Buck, I., Garland, M., Skadron, K.: Scalable parallel programming with CUDA. ACM Queue 6(2), 40–53 (2008)

  24. Orseau, L., Lelis, L., Lattimore, T., Weber, T.: Single-agent policy tree search with guarantees. In: Advances in Neural Information Processing Systems, pp. 3201–3211 (2018)

  25. Otten, J.: leanCoP 2.0 and ileanCoP 1.2: high performance lean theorem proving in classical and intuitionistic logic (system descriptions). In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 283–291. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-71070-7_23

  26. Otten, J.: Restricting backtracking in connection calculi. AI Commun. 23(2–3), 159–182 (2010)

  27. Paskevich, A.: Connection tableaux with lazy paramodulation. J. Autom. Reason. 40(2–3), 179–194 (2008)

  28. Rawson, M., Reger, G.: A neurally-guided, parallel theorem prover. In: Herzig, A., Popescu, A. (eds.) FroCoS 2019. LNCS (LNAI), vol. 11715, pp. 40–56. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29007-8_3

  29. Rawson, M., Reger, G.: Old or heavy? Decaying gracefully with age/weight shapes. In: Fontaine, P. (ed.) CADE 2019. LNCS (LNAI), vol. 11716, pp. 462–476. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29436-6_27

  30. Rawson, M., Reger, G.: Directed graph networks for logical reasoning. In: Practical Aspects of Automated Reasoning (2020)

  31. Rawson, M., Reger, G.: lazyCoP 0.1. EasyChair Preprint no. 3926 (2020)

  32. Riazanov, A., Voronkov, A.: Limited resource strategy in resolution theorem proving. J. Symb. Comput. 36(1–2), 101–115 (2003)

  33. Schadd, M.P.D., Winands, M.H.M., van den Herik, H.J., Chaslot, G.M.J.-B., Uiterwijk, J.W.H.M.: Single-player Monte-Carlo tree search. In: van den Herik, H.J., Xu, X., Ma, Z., Winands, M.H.M. (eds.) CG 2008. LNCS, vol. 5131, pp. 1–12. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-87608-3_1

  34. Schulz, S.: E - a brainiac theorem prover. AI Commun. 15(2–3), 111–126 (2002)

  35. Schulz, S., Möhrmann, M.: Performance of clause selection heuristics for saturation-based theorem proving. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 330–345. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40229-1_23

  36. Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L., Dill, D.L.: Learning a SAT solver from single-bit supervision. arXiv preprint arXiv:1802.03685 (2018)

  37. Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)

  38. Sutcliffe, G.: The TPTP problem library and associated infrastructure. J. Autom. Reason. 43(4), 337 (2009). https://doi.org/10.1007/s10817-009-9143-8

  39. Sutcliffe, G.: The CADE ATP system competition – CASC. AI Mag. 37(2), 99–101 (2016)

  40. Urban, J.: MPTP 0.2: design, implementation, and initial experiments. J. Autom. Reason. 37(1–2), 21–43 (2006)

  41. Urban, J., Vyskočil, J., Štěpánek, P.: MaLeCoP: machine learning connection prover. In: Brünnler, K., Metcalfe, G. (eds.) TABLEAUX 2011. LNCS (LNAI), vol. 6793, pp. 263–277. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22119-4_21

  42. Wang, M., Tang, Y., Wang, J., Deng, J.: Premise selection for theorem proving by deep graph embedding. In: Advances in Neural Information Processing Systems, pp. 2786–2796 (2017)

  43. Zombori, Z., Csiszárik, A., Michalewski, H., Kaliszyk, C., Urban, J.: Towards finding longer proofs. arXiv preprint arXiv:1905.13100 (2019)

  44. Zombori, Z., Urban, J., Brown, C.E.: Prolog technology reinforcement learning prover. In: Peltier, N., Sofronie-Stokkermans, V. (eds.) IJCAR 2020. LNCS (LNAI), vol. 12167, pp. 489–507. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51054-1_33

Author information

Correspondence to Michael Rawson.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Rawson, M., Reger, G. (2021). lazyCoP: Lazy Paramodulation Meets Neurally Guided Search. In: Das, A., Negri, S. (eds.) Automated Reasoning with Analytic Tableaux and Related Methods. TABLEAUX 2021. Lecture Notes in Computer Science, vol. 12842. Springer, Cham. https://doi.org/10.1007/978-3-030-86059-2_11

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-86059-2_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86058-5

  • Online ISBN: 978-3-030-86059-2

  • eBook Packages: Computer Science, Computer Science (R0)
