Preventing Groundings and Handling Evidence in the Lifted Junction Tree Algorithm

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10505)

Abstract

For inference in probabilistic formalisms with first-order constructs, lifted variable elimination (LVE) is one of the standard approaches for single queries. To handle multiple queries efficiently, the lifted junction tree algorithm (LJT) uses a specific representation of a first-order knowledge base and LVE in its computations. Unfortunately, LJT induces unnecessary groundings in cases where the standard LVE algorithm, GC-FOVE, has a fully lifted run. Additionally, LJT does not handle evidence explicitly. We extend LJT (i) to identify and prevent unnecessary groundings and (ii) to effectively handle evidence in a lifted manner. Given multiple queries, e.g., in machine learning applications, our extension computes answers faster than LJT and GC-FOVE.
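
The abstract contrasts single-query LVE with LJT's multi-query setup: build a junction tree once, compute messages with LVE, and answer each query locally in one cluster, with evidence absorbed before message passing. To illustrate why that amortisation pays off for multiple queries, the following is a minimal, runnable propositional (ground) sketch of junction tree message passing with variable elimination. It is not the paper's lifted algorithm: parfactors, groundings, and lifted evidence handling are omitted, and the toy chain model A - B - C, its factor values, and all function names (multiply, marginalize, normalize) are invented for illustration.

from itertools import product

def multiply(f, g):
    """Pointwise product of two factors given as (variable list, table dict)."""
    vars_f, tab_f = f
    vars_g, tab_g = g
    out_vars = list(dict.fromkeys(vars_f + vars_g))  # union, order-preserving
    out_tab = {}
    for assignment in product([0, 1], repeat=len(out_vars)):
        env = dict(zip(out_vars, assignment))
        key_f = tuple(env[v] for v in vars_f)
        key_g = tuple(env[v] for v in vars_g)
        out_tab[assignment] = tab_f[key_f] * tab_g[key_g]
    return (out_vars, out_tab)

def marginalize(f, keep):
    """Sum out every variable of the factor that is not in `keep`."""
    vars_f, tab_f = f
    out_vars = [v for v in vars_f if v in keep]
    out_tab = {}
    for assignment, value in tab_f.items():
        key = tuple(a for v, a in zip(vars_f, assignment) if v in keep)
        out_tab[key] = out_tab.get(key, 0.0) + value
    return (out_vars, out_tab)

def normalize(f):
    """Rescale a factor so its entries sum to one."""
    vars_f, tab_f = f
    z = sum(tab_f.values())
    return (vars_f, {k: v / z for k, v in tab_f.items()})

# Toy chain model over binary A, B, C with clusters {A,B} and {B,C},
# separator {B}. Factor values are arbitrary and purely illustrative.
phi_ab = (["A", "B"], {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0})
phi_bc = (["B", "C"], {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0})

# One message per direction over the separator {B}, computed once by
# eliminating the non-separator variable of the sending cluster. In a
# two-cluster tree there are no further incoming messages to fold in.
# This reuse across queries is what makes the tree pay off.
msg_ab_to_bc = marginalize(phi_ab, {"B"})
msg_bc_to_ab = marginalize(phi_bc, {"B"})

# Each query is then answered inside a single cluster from its local
# factor plus the incoming message, again by variable elimination.
belief_ab = multiply(phi_ab, msg_bc_to_ab)
belief_bc = multiply(phi_bc, msg_ab_to_bc)
print("P(A) =", normalize(marginalize(belief_ab, {"A"}))[1])
print("P(C) =", normalize(marginalize(belief_bc, {"C"}))[1])

A lifted implementation in the spirit of the abstract would replace the tables with parfactors over parameterised random variables and the sum-outs with lifted operators, and, per the proposed extension, absorb evidence into the clusters in a lifted manner before any messages are sent.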

Author information

Correspondence to Tanya Braun.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Braun, T., Möller, R. (2017). Preventing Groundings and Handling Evidence in the Lifted Junction Tree Algorithm. In: Kern-Isberner, G., Fürnkranz, J., Thimm, M. (eds) KI 2017: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 10505. Springer, Cham. https://doi.org/10.1007/978-3-319-67190-1_7

  • DOI: https://doi.org/10.1007/978-3-319-67190-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67189-5

  • Online ISBN: 978-3-319-67190-1

  • eBook Packages: Computer Science, Computer Science (R0)
