
Stacked Structure Learning for Lifted Relational Neural Networks

  • Conference paper
  • In: Inductive Logic Programming (ILP 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10759)

Abstract

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks. While previous work has shown that using LRNNs can lead to state-of-the-art results in various ILP tasks, these results depended on hand-crafted rules. In this paper, we extend the framework of LRNNs with structure learning, thus enabling a fully automated learning process. Like many ILP methods, our structure learning algorithm proceeds iteratively, searching top-down through the hypothesis space of all possible Horn clauses and considering both the predicates that occur in the training examples and invented soft concepts entailed by the best weighted rules found so far. In the experiments, we demonstrate the ability to automatically induce useful hierarchical soft concepts, leading to deep LRNNs with competitive predictive power.
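
The stacked search loop described above can be outlined as follows. This is a minimal Python sketch under assumptions drawn only from the abstract; every name in it (Clause, most_general_clause, refine, score_rules, max_length, max_vars, n_layers) is a hypothetical placeholder rather than the authors' implementation. Each iteration searches the rule space top-down, scores a candidate clause by the predictive performance of the LRNN built from the rules found so far plus the candidate, and then makes the head of the best weighted rule available as an invented soft concept for the next iteration, which stacks another layer onto the network.

    # Minimal sketch, assuming hypothetical helper names; not the authors' code.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Clause:
        head_predicate: str                            # predicate defined by this weighted rule
        body: List[str] = field(default_factory=list)  # body literals (as predicate names)
        num_vars: int = 1                              # number of variables in the clause

    def stacked_structure_learning(
        predicates: List[str],                               # predicates occurring in the examples
        most_general_clause: Callable[[List[str]], Clause],  # starting point of the top-down search
        refine: Callable[[Clause, List[str]], List[Clause]], # adds one body literal per refinement
        score_rules: Callable[[List[Clause]], float],        # trains an LRNN from the rules, returns a validation score
        max_length: int,
        max_vars: int,
        n_layers: int,
    ) -> List[Clause]:
        predicates = list(predicates)
        learned_rules: List[Clause] = []
        for _ in range(n_layers):
            best, best_score = None, float("-inf")
            frontier = [most_general_clause(predicates)]
            while frontier:
                clause = frontier.pop()
                # user-specified bounds on rule length and number of variables
                if len(clause.body) > max_length or clause.num_vars > max_vars:
                    continue
                score = score_rules(learned_rules + [clause])
                if score > best_score:
                    best, best_score = clause, score
                frontier.extend(refine(clause, predicates))
            if best is None:
                break
            learned_rules.append(best)
            # The head of the newly learned weighted rule becomes an invented
            # soft concept, usable as a body predicate in the next iteration.
            predicates.append(best.head_predicate)
        return learned_rules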

Notes

  1. Variants of this strategy are employed by many structure learning algorithms in the context of statistical relational learning, e.g. [4, 5, 8].

  2. The space of rules is defined by two user-specified constraints: the maximum rule length and the maximum number of variables in a rule.

References

  1. Blockeel, H., Uwents, W.: Using neural networks for relational learning. In: ICML 2004 Workshop on Statistical Relational Learning and Its Connection to Other Fields, pp. 23–28 (2004)

  2. Bottou, L.: Stochastic gradient descent tricks. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 421–436. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_25

  3. Cohen, W.W.: TensorLog: a differentiable deductive database. arXiv preprint arXiv:1605.06523 (2016)

  4. Davis, J., Burnside, E., de Castro Dutra, I., Page, D., Costa, V.S.: An integrated approach to learning Bayesian networks of rules. In: Gama, J., Camacho, R., Brazdil, P.B., Jorge, A.M., Torgo, L. (eds.) ECML 2005. LNCS (LNAI), vol. 3720, pp. 84–95. Springer, Heidelberg (2005). https://doi.org/10.1007/11564096_13

  5. Dinh, Q.T., Exbrayat, M., Vrain, C.: Generative structure learning for Markov logic networks based on graph of predicates. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), vol. 22, p. 1249 (2011)

  6. Fahlman, S.E., Lebiere, C.: The cascade-correlation learning architecture (1989)

  7. Hájek, P.: Metamathematics of Fuzzy Logic, vol. 4. Springer, Dordrecht (1998). https://doi.org/10.1007/978-94-011-5300-3

  8. Kok, S., Domingos, P.: Learning the structure of Markov logic networks. In: Proceedings of the 22nd International Conference on Machine Learning, pp. 441–448 (2005)

  9. Landwehr, N., Kersting, K., De Raedt, L.: Integrating naive Bayes and FOIL. J. Mach. Learn. Res. 8, 481–507 (2007)

  10. Landwehr, N., Passerini, A., De Raedt, L., Frasconi, P.: kFOIL: learning simple relational kernels. In: AAAI 2006: Proceedings of the 21st National Conference on Artificial Intelligence, pp. 389–394. AAAI Press (2006)

  11. Muggleton, S.H., Lin, D., Tamaddoni-Nezhad, A.: Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited. Mach. Learn. 100(1), 49–73 (2015)

  12. Opitz, D.W., Shavlik, J.W.: Heuristically expanding knowledge-based neural networks. In: IJCAI, pp. 1360–1365 (1993)

  13. Ralaivola, L., Swamidass, S.J., Saigo, H., Baldi, P.: Graph kernels for chemical informatics. Neural Netw. 18(8), 1093–1110 (2005)

  14. Rocktäschel, T., Riedel, S.: Learning knowledge base inference with neural theorem provers. In: NAACL Workshop on Automated Knowledge Base Construction (AKBC) (2016)

  15. Šourek, G., Aschenbrenner, V., Železný, F., Kuželka, O.: Lifted relational neural networks. In: Proceedings of the NIPS Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches (2015)

  16. Šourek, G., Aschenbrenner, V., Železný, F., Kuželka, O.: Lifted relational neural networks. arXiv preprint (2015). http://arxiv.org/abs/1508.05128

  17. Šourek, G., Manandhar, S., Železný, F., Schockaert, S., Kuželka, O.: Learning predictive categories using lifted relational neural networks. In: Cussens, J., Russo, A. (eds.) ILP 2016. LNCS (LNAI), vol. 10326, pp. 108–119. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63342-8_9

Acknowledgements

GŠ, MS, and FŽ acknowledge support from project no. 17-26999S granted by the Czech Science Foundation. This work was done while OK was with Cardiff University, supported by a grant from the Leverhulme Trust (RPG-2014-164). SS is supported by ERC Starting Grant 637277. Computational resources were provided by CESNET (LM2015042) and the CERIT Scientific Cloud (LM2015085) under the programme “Projects of Large Research, Development, and Innovations Infrastructures”.

Author information

Correspondence to Gustav Šourek.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Šourek, G., Svatoš, M., Železný, F., Schockaert, S., Kuželka, O. (2018). Stacked Structure Learning for Lifted Relational Neural Networks. In: Lachiche, N., Vrain, C. (eds) Inductive Logic Programming. ILP 2017. Lecture Notes in Computer Science, vol. 10759. Springer, Cham. https://doi.org/10.1007/978-3-319-78090-0_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-78090-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78089-4

  • Online ISBN: 978-3-319-78090-0

  • eBook Packages: Computer Science, Computer Science (R0)
