
Bayesian Markov Logic Networks

Bayesian Inference for Statistical Relational Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 11298))

Abstract

One of the most important foundational challenges of statistical relational learning is the development of a uniform framework in which learning and logical reasoning are seamlessly integrated. State-of-the-art approaches propose to modify well-known machine learning methods based on parameter optimization (e.g., neural networks and graphical models) so that they take into account structural knowledge expressed by logical constraints. In this paper, we follow an alternative direction, considering the Bayesian approach to machine learning. In particular, given partial knowledge of a hybrid domain (i.e., a domain that contains both relational structure and continuous features) as a set \(\mathcal {T}\) of axioms and a stochastic (in)dependence hypothesis \(\mathscr {F}\), both encoded in a first-order language \(\mathcal {L}\), we propose to model it by a probability distribution function (PDF) \(p(\mathbf {x}\mid \mathcal {T},\mathscr {F})\) over the \(\mathcal {L}\)-interpretations \(\mathbf {x}\). The stochastic (in)dependence hypothesis \(\mathscr {F}\) is represented as a Bayesian Markov Logic Network w.r.t. a parametric undirected graph, interpreted as the PDF. We propose to approximate \(p(\mathbf {x}\mid \mathcal {T},\mathscr {F})\) by variational inference and show that such an approximation is possible if and only if \(\mathscr {F}\) satisfies a property called orthogonality. This property can also be achieved by extending \(\mathcal {L}\) and adjusting \(\mathcal {T}\) and \(\mathscr {F}\).
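As background for the variational approximation mentioned in the abstract, the sketch below illustrates the general idea on a toy problem: an unnormalized target density is approximated by the closest Gaussian, found by minimizing the KL divergence (equivalently, maximizing the ELBO) over the variational parameters. This is a hypothetical, simplified illustration using only NumPy, not the paper's Bayesian-MLN algorithm; the target `log_p_tilde` and the grid search are assumptions made for the example.

```python
import numpy as np

def log_p_tilde(x):
    # Unnormalized log-density of the target: a Gaussian with
    # mean 2.0 and standard deviation 0.5 (normalizer dropped).
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def neg_elbo(mu, sigma, n=2001):
    # KL(q || p~) = E_q[log q - log p~], approximated by quadrature
    # on a wide grid around the variational mean. Minimizing this is
    # equivalent to maximizing the ELBO, since log Z is a constant.
    xs = np.linspace(mu - 8 * sigma, mu + 8 * sigma, n)
    dx = xs[1] - xs[0]
    log_q = -0.5 * ((xs - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    q = np.exp(log_q)
    return float(np.sum(q * (log_q - log_p_tilde(xs))) * dx)

# Crude grid search over the variational parameters (mu, sigma).
mus = np.linspace(0.0, 4.0, 41)
sigmas = np.linspace(0.1, 2.0, 39)
_, mu_star, sigma_star = min(
    (neg_elbo(m, s), m, s) for m in mus for s in sigmas
)
print(mu_star, sigma_star)  # should recover roughly mu=2.0, sigma=0.5
```

In practice, black-box variational inference [9] replaces the grid search with stochastic gradient ascent on a Monte Carlo estimate of the ELBO, which scales to the high-dimensional distributions over interpretations considered in the paper.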


Notes

  1. A numeric term is a term that evaluates to a (real) number.

  2. Familiarity with probability theory as presented in, e.g., [12, Sects. 1.1 and 1.2] is assumed.

  3. The term weight function is used to highlight the analogy with (H)MLNs, in which \(w_{\imath \jmath }(\mathbf {x},\varvec{\omega }_\imath )\) is \(e^{w_\imath s_\imath (\varphi _{\imath \jmath }(\mathbf {x}),\varvec{\tau }_{\imath \jmath }(\mathbf {x}))}\), where \(w_\imath \) is a constant, called the weight, and \(s_\imath \) is a real function.

  4. Note that we also get this equality by multiplying (7) by \(p(\mathcal {T}\mid \varvec{\omega })\cdot p(\varvec{\omega }\mid \mathscr {F})\).
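The (H)MLN-style weight function recalled in note 3 has the form \(e^{w_\imath s_\imath (\cdot )}\): an exponentiated weighted feature, as in Markov logic networks [10]. A minimal numeric illustration, with hypothetical weight and feature values (the paper's \(s_\imath \) and formulas are abstract here):

```python
import math

def weight(w_i, s_value):
    # Exponentiated weighted feature e^{w_i * s_i(...)}, the MLN-style
    # factor: w_i is the constant weight, s_value the real-valued
    # evaluation s_i of the grounded formula.
    return math.exp(w_i * s_value)

# A satisfied formula (s=1) with weight 1.5 versus an unsatisfied one (s=0):
print(weight(1.5, 1.0))  # exp(1.5)
print(weight(1.5, 0.0))  # exp(0) = 1.0
```

Multiplying such factors over all groundings yields the unnormalized density of the undirected graphical model, which is why satisfied high-weight formulas make an interpretation exponentially more probable.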

References

  1. Bishop, C.M.: Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, New York (2006)
  2. Bishop, C.M.: A new framework for machine learning. In: Zurada, J.M., Yen, G.G., Wang, J. (eds.) WCCI 2008. LNCS, vol. 5050, pp. 1–24. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-68860-0_1
  3. Bishop, C.M., Spiegelhalter, D., Winn, J.: VIBES: a variational inference engine for Bayesian networks. In: Becker, S., Thrun, S., Obermayer, K. (eds.) Advances in Neural Information Processing Systems 15 (NIPS 2002), pp. 777–784. MIT Press (2002)
  4. Clifford, P.: Markov random fields in statistics. In: Grimmett, G., Welsh, D. (eds.) Disorder in Physical Systems: A Volume in Honour of John M. Hammersley, pp. 19–32. Oxford University Press, Oxford (1990)
  5. Domingos, P., Lowd, D.: Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael (2009)
  6. Jordan, M.I., Ghahramani, Z., Jaakkola, T.S., Saul, L.K.: An introduction to variational methods for graphical models. Mach. Learn. 37(2), 183–233 (1999)
  7. Kindermann, R., Snell, J.L.: Markov Random Fields and Their Applications. AMS, Providence (1980)
  8. Kok, S., et al.: The Alchemy system for statistical relational AI. Technical report, Department of Computer Science and Engineering, University of Washington (2007)
  9. Ranganath, R., Gerrish, S., Blei, D.M.: Black box variational inference. In: Proceedings of AISTATS 2014. JMLR Proceedings, vol. 33, pp. 814–822. JMLR.org (2014)
  10. Richardson, M., Domingos, P.M.: Markov logic networks. Mach. Learn. 62(1–2), 107–136 (2006)
  11. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 3rd edn. Pearson Education, London (2010)
  12. Shao, J.: Mathematical Statistics. Springer Texts in Statistics. Springer, New York (1999). https://doi.org/10.1007/b97553
  13. Singla, P., Domingos, P.M.: Markov logic in infinite domains. CoRR, abs/1206.5292 (2012)
  14. Wainwright, M.J., Jordan, M.I.: Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn. 1(1–2), 1–305 (2008)
  15. Wang, J., Domingos, P.M.: Hybrid Markov logic networks. In: Fox, D., Gomes, C.P. (eds.) Proceedings of AAAI 2008, pp. 1106–1111. AAAI Press (2008)
  16. Wingate, D., Weber, T.: Automated variational inference in probabilistic programming. CoRR, abs/1301.1299 (2013)
  17. Winn, J.M., Bishop, C.M.: Variational message passing. J. Mach. Learn. Res. 6, 661–694 (2005)


Author information

Correspondence to Radim Nedbal or Luciano Serafini.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Nedbal, R., Serafini, L. (2018). Bayesian Markov Logic Networks. In: Ghidini, C., Magnini, B., Passerini, A., Traverso, P. (eds.) AI*IA 2018 – Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11298. Springer, Cham. https://doi.org/10.1007/978-3-030-03840-3_26

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-03840-3_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03839-7

  • Online ISBN: 978-3-030-03840-3

  • eBook Packages: Computer Science; Computer Science (R0)
