Abstract
While “semantic parsing” covers a wide spectrum of tasks, we consider converting English to first-order predicate logic (FOL) with the help of large language models (LLMs). The paper reports experiments with different approaches to using an LLM for semantic parsing to FOL, ranging from standalone zero-shot and multi-shot scenarios to the use of the LLM as a specialized component in several stages of a semantic parsing pipeline. The goal of the experiments is to identify promising approaches for including LLM components in a question-answering pipeline built around a logical reasoner with extensions for commonsense reasoning.
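The multi-shot scenario mentioned above can be sketched as a prompt containing a handful of English-to-FOL example pairs followed by the target sentence, with a lightweight syntactic sanity check on the model's output before it is passed to a reasoner. This is a minimal illustrative sketch, not the authors' actual prompts or pipeline: the example sentences, the predicate names, and the helper functions `build_prompt` and `balanced` are all assumptions introduced here for illustration.

```python
# Sketch of a multi-shot prompt for English -> first-order logic (FOL).
# The example pairs and predicate vocabulary are illustrative assumptions.

FEW_SHOT_EXAMPLES = [
    ("Every cat is an animal.", "all X (cat(X) => animal(X))."),
    ("John owns a dog.", "exists X (dog(X) & owns(john, X))."),
]

def build_prompt(sentence: str) -> str:
    """Assemble a multi-shot prompt asking the model for a FOL formula."""
    parts = ["Translate each English sentence into first-order logic."]
    for eng, fol in FEW_SHOT_EXAMPLES:
        parts.append(f"English: {eng}\nFOL: {fol}")
    parts.append(f"English: {sentence}\nFOL:")
    return "\n\n".join(parts)

def balanced(formula: str) -> bool:
    """Cheap sanity check on LLM output: parentheses must balance."""
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

prompt = build_prompt("Every dog is an animal.")
print(balanced("all X (dog(X) => animal(X))."))  # True
```

In a full pipeline the assembled prompt would be sent to an LLM, and only formulas passing syntactic checks like `balanced` would be handed to the logical reasoner; malformed outputs could trigger a retry or a fallback parser.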
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Tammet, T., Järv, P., Verrev, M., Draheim, D. (2024). Experiments with LLMs for Converting Language to Logic. In: Besold, T.R., d’Avila Garcez, A., Jimenez-Ruiz, E., Confalonieri, R., Madhyastha, P., Wagner, B. (eds) Neural-Symbolic Learning and Reasoning. NeSy 2024. Lecture Notes in Computer Science(), vol 14980. Springer, Cham. https://doi.org/10.1007/978-3-031-71170-1_24
Print ISBN: 978-3-031-71169-5
Online ISBN: 978-3-031-71170-1