Experiments with LLMs for Converting Language to Logic

  • Conference paper
  • In: Neural-Symbolic Learning and Reasoning (NeSy 2024)

Abstract

While the meaning of “semantic parsing” covers a wide spectrum, we consider converting English to first-order predicate logic (FOL) with the help of large language models (LLMs). The paper focuses on experiments with different approaches to using an LLM for semantic parsing to FOL: from standalone zero-shot and multi-shot scenarios to its use as a specialized component in several stages of the semantic parsing pipeline. The goal of the experiments is to determine promising approaches for including LLM components in a question-answering pipeline built around a logical reasoner with extensions for commonsense reasoning.
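To make the standalone zero-shot scenario concrete, the sketch below shows one plausible shape of such a component: a single prompt asking an LLM to emit a FOL formula for one English sentence. It assumes an OpenAI-style chat-completion client; the prompt wording, model name, and output syntax are illustrative assumptions, not the paper's own prompts (the actual code is linked in the notes below).

```python
# Minimal zero-shot English -> FOL sketch, assuming an OpenAI-style
# chat-completion API. The prompt wording, model name, and FOL syntax
# below are illustrative assumptions, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Translate the following English sentence into a first-order "
    "predicate logic formula. Use forall, exists, &, |, ~ and -> "
    "as connectives, and reply with the formula only.\n\n"
    "Sentence: {sentence}"
)

def english_to_fol(sentence: str, model: str = "gpt-4o") -> str:
    """Ask the LLM for a single FOL formula for one English sentence."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
        temperature=0,  # keep the output stable for downstream parsing
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # A well-behaved model would answer something like:
    #   forall X (dog(X) -> animal(X))
    print(english_to_fol("All dogs are animals."))
```

A multi-shot variant would prepend worked sentence/formula pairs to the same prompt; in the pipeline setting, the same call pattern would serve a single specialized stage rather than produce the whole formula.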


Notes

  1. http://github.com/tammet/nlpsolver.

  2. https://github.com/tammet/nlpsolver/tree/main/gpt.


Author information

Corresponding author: Tanel Tammet.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tammet, T., Järv, P., Verrev, M., Draheim, D. (2024). Experiments with LLMs for Converting Language to Logic. In: Besold, T.R., d’Avila Garcez, A., Jimenez-Ruiz, E., Confalonieri, R., Madhyastha, P., Wagner, B. (eds) Neural-Symbolic Learning and Reasoning. NeSy 2024. Lecture Notes in Computer Science, vol 14980. Springer, Cham. https://doi.org/10.1007/978-3-031-71170-1_24

  • DOI: https://doi.org/10.1007/978-3-031-71170-1_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-71169-5

  • Online ISBN: 978-3-031-71170-1

  • eBook Packages: Computer Science, Computer Science (R0)
