Simple and Effective Transfer Learning for Neuro-Symbolic Integration

Conference paper, Neural-Symbolic Learning and Reasoning (NeSy 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14979)

Abstract

Deep Learning (DL) techniques have achieved remarkable successes in recent years. However, their ability to generalize and to execute reasoning tasks remains a challenge. A potential solution is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning. Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task. These methods exhibit superior generalization capacity compared to fully neural architectures. However, they suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima. This paper proposes a simple yet effective method to ameliorate these problems. The key idea is to pretrain a neural model on the downstream task, and then train a NeSy model on the same task via transfer learning, injecting the weights of the perceptual part from the pretrained network. The key observation of our work is that the pretrained neural network fails to generalize only at the level of the symbolic part, while it is perfectly capable of learning the mapping from perceptions to symbols. We have tested our training strategy on various state-of-the-art (SOTA) NeSy methods and datasets, demonstrating consistent improvements on the aforementioned problems.
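To make the two-stage strategy concrete, here is a minimal PyTorch sketch under some illustrative assumptions: an MNIST-addition-style downstream task (predict the sum of two digit images) and a DeepProbLog-style NeSy model whose perception component shares the same backbone architecture. All names here (PerceptionNet, pretrain_step, etc.) are hypothetical and do not come from the paper's code.

```python
import torch
import torch.nn as nn

# Hypothetical perception backbone: maps a 28x28 grayscale image
# to a distribution over the 10 digit symbols.
class PerceptionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 13 * 13, 10),
        )

    def forward(self, x):
        return self.features(x).softmax(dim=-1)

# Step 1: pretrain a purely neural model end-to-end on the
# downstream task (sum of two digits), so the backbone learns the
# perception-to-symbol mapping implicitly, without symbol labels.
perception = PerceptionNet()
sum_head = nn.Linear(20, 19)  # 19 possible sums: 0..18
params = list(perception.parameters()) + list(sum_head.parameters())
opt = torch.optim.Adam(params)
ce = nn.CrossEntropyLoss()

def pretrain_step(img1, img2, target_sum):
    # target_sum: long tensor of class indices in 0..18
    logits = sum_head(torch.cat([perception(img1), perception(img2)], dim=-1))
    loss = ce(logits, target_sum)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Step 2: inject the pretrained weights into the NeSy model's
# perception component; training with the symbolic reasoner on top
# is framework-specific, so it is only indicated here.
nesy_perception = PerceptionNet()
nesy_perception.load_state_dict(perception.state_dict())
# nesy_model = SomeNeSyFramework(nesy_perception, knowledge="sum rule")
# train(nesy_model, ...)
```

The essential step is the weight injection at the end: only the perceptual part is transferred, while the symbolic part of the NeSy model is left untouched.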

A. Daniele and T. Campari—Equal contribution.

Notes

  1. Here we refer to NeSy systems in the specific context where the symbolic reasoner is employed to infer new facts from the symbolic knowledge; not all NeSy systems operate in this manner. In particular, this excludes methods like LTN, where the knowledge is merely used to constrain the outputs of the neural network. A worked sketch of this kind of symbolic inference follows these notes.

  2. Note that ILR is not considered since it is propositional. While, in theory, it can be extended to first-order logic through propositionalization, such an extension goes beyond the scope of this work.
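To illustrate the distinction drawn in Note 1, and the propositionalization mentioned in Note 2, here is a small hypothetical Python sketch for an MNIST-addition-style task (not code from the paper): the first-order rule add(X, Y, Z) :- digit1(X), digit2(Y), Z = X + Y is grounded over the finite digit domain, and the reasoner then infers a distribution over the output Z from the network's symbol distributions.

```python
from itertools import product

# Hypothetical neural outputs: distributions over the digit
# symbols 0..9 for the two input images (placeholders here).
p_digit1 = [0.1] * 10
p_digit2 = [0.1] * 10

# Propositionalization (cf. Note 2): expand the first-order rule
#   add(X, Y, Z) :- digit1(X), digit2(Y), Z = X + Y
# into its 100 ground instances over the finite digit domain.
ground_rules = [(x, y, x + y) for x, y in product(range(10), repeat=2)]

# Symbolic inference (cf. Note 1): derive the probability of each
# possible sum Z by aggregating over the ground rules, assuming
# the two digit predictions are independent.
p_sum = [0.0] * 19
for x, y, z in ground_rules:
    p_sum[z] += p_digit1[x] * p_digit2[y]

print(p_sum)  # distribution over Z = 0..18; sums to 1
```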

References

  1. Aspis, Y., Broda, K., Lobo, J., Russo, A.: Embed2Sym: scalable neuro-symbolic reasoning via clustered embeddings. In: International Conference on Principles of Knowledge Representation and Reasoning (2022)

  2. Badreddine, S., d’Avila Garcez, A., Serafini, L., Spranger, M.: Logic tensor networks. Artif. Intell. (2022)

  3. Barbiero, P., et al.: Interpretable neural-symbolic concept reasoning. In: Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., Scarlett, J. (eds.) ICML 2023. Proceedings of Machine Learning Research, vol. 202, pp. 1801–1825. PMLR (2023)

  4. Besold, T.R., et al.: Neural-symbolic learning and reasoning: a survey and interpretation. In: Neuro-Symbolic Artificial Intelligence: The State of the Art (2021)

  5. Brewka, G., Eiter, T., Truszczyński, M.: Answer set programming at a glance. Commun. ACM (2011)

  6. Bruynooghe, M., et al.: ProbLog technology for inference in a probabilistic first order logic (2010)

  7. Cohen, G., Afshar, S., Tapson, J., van Schaik, A.: EMNIST: extending MNIST to handwritten letters (2017)

  8. Daniele, A., Campari, T., Malhotra, S., Serafini, L.: Deep symbolic learning: discovering symbols and rules from perceptions. In: IJCAI 2023, pp. 3597–3605. ijcai.org (2023)

  9. Daniele, A., van Krieken, E., Serafini, L., van Harmelen, F.: Refining neural network predictions using background knowledge. Mach. Learn. 1–39 (2023)

  10. Daniele, A., Serafini, L.: Knowledge enhanced neural networks. In: Pacific Rim International Conference on Artificial Intelligence (2019)

  11. Darwiche, A.: SDD: a new canonical representation of propositional knowledge bases. In: IJCAI (2011)

  12. Defazio, A., Jelassi, S.: Adaptivity without compromise: a momentumized, adaptive, dual averaged gradient method for stochastic optimization. JMLR (2022)

  13. Diligenti, M., Gori, M., Saccà, C.: Semantic-based regularization for learning and inference. Artif. Intell. 244, 143–165 (2017)

  14. Feng, Z., Xu, C., Tao, D.: Self-supervised representation learning by rotation feature decoupling. In: CVPR (2019)

  15. Giunchiglia, E., Stoian, M.C., Khan, S., Cuzzolin, F., Lukasiewicz, T.: ROAD-R: the autonomous driving dataset with logical requirements. Mach. Learn. (2023)

  16. Goodfellow, I., et al.: Generative adversarial nets. In: NeurIPS (2014)

  17. Have, C.T.: Stochastic definite clause grammars. In: Proceedings of the International Conference RANLP-2009 (2009)

  18. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)

  19. Hinton, G., et al.: Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process. Mag. (2012)

  20. Liévin, V., Hother, C.E., Motzfeldt, A.G., Winther, O.: Can large language models reason about medical questions? Patterns 5(3), 100943 (2024)

  21. Liu, A., Xu, H., Van den Broeck, G., Liang, Y.: Out-of-distribution generalization by neural-symbolic joint training. In: AAAI (2023)

  22. Manhaeve, R., Dumancic, S., Kimmig, A., Demeester, T., De Raedt, L.: DeepProbLog: neural probabilistic logic programming. In: NeurIPS (2018)

  23. Marconato, E., Teso, S., Passerini, A.: Neuro-symbolic reasoning shortcuts: mitigation strategies and their limitations. In: d’Avila Garcez, A.S., Besold, T.R., Gori, M., Jiménez-Ruiz, E. (eds.) International Workshop on Neural-Symbolic Learning and Reasoning 2023. CEUR Workshop Proceedings, vol. 3432, pp. 162–166. CEUR-WS.org (2023)

  24. Noroozi, M., Favaro, P.: Unsupervised learning of visual representations by solving jigsaw puzzles. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9910, pp. 69–84. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46466-4_5

  25. Raedt, L.D., Dumancic, S., Manhaeve, R., Marra, G.: From statistical relational to neuro-symbolic artificial intelligence. In: Bessiere, C. (ed.) IJCAI 2020, pp. 4943–4950. ijcai.org (2020)

  26. Sarker, M.K., Zhou, L., Eberhart, A., Hitzler, P.: Neuro-symbolic artificial intelligence. AI Commun. 34, 197–209 (2021)

  27. Topan, S., Rolnick, D., Si, X.: Techniques for symbol grounding with SATNet. In: NeurIPS (2021)

  28. Winters, T., Marra, G., Manhaeve, R., De Raedt, L.: DeepStochLog: neural stochastic logic programming. In: AAAI (2022)

  29. Xu, J., Zhang, Z., Friedman, T., Liang, Y., den Broeck, G.V.: A semantic loss function for deep learning with symbolic knowledge. In: ICML (2018)

  30. Yang, Z., Ishay, A., Lee, J.: NeurASP: embracing neural networks into answer set programming. In: IJCAI (2020)

  31. Young, T., Hazarika, D., Poria, S., Cambria, E.: Recent trends in deep learning based natural language processing. CoRR arXiv:1708.02709 (2017)

  32. Zhao, Z.Q., Zheng, P., Xu, S.T., Wu, X.: Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Syst. 30, 3212–3232 (2019)

Acknowledgments

TC and LS were supported by the PNRR project Future AI Research (FAIR - PE00000013), under the NRRP MUR program funded by the NextGenerationEU.

Author information

Correspondence to Alessandro Daniele.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Daniele, A., Campari, T., Malhotra, S., Serafini, L. (2024). Simple and Effective Transfer Learning for Neuro-Symbolic Integration. In: Besold, T.R., d’Avila Garcez, A., Jiménez-Ruiz, E., Confalonieri, R., Madhyastha, P., Wagner, B. (eds) Neural-Symbolic Learning and Reasoning. NeSy 2024. Lecture Notes in Computer Science, vol. 14979. Springer, Cham. https://doi.org/10.1007/978-3-031-71167-1_9

  • DOI: https://doi.org/10.1007/978-3-031-71167-1_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-71166-4

  • Online ISBN: 978-3-031-71167-1

  • eBook Packages: Computer Science, Computer Science (R0)
