Abstract
The problems of generating candidate hypotheses and inferring the best hypothesis from this set are typically seen as two distinct aspects of the more general problem of non-demonstrative inference, or abduction. In the context of Bayesian networks the latter problem (computing most probable explanations) is well understood, while the former is typically left as an exercise to the modeler; in other words, the candidate hypotheses are pre-selected and hard-coded. In reality, however, non-demonstrative inference is an interactive process that switches between hypothesis generation, inference to the best explanation, evidence gathering, and deciding which information is relevant. In this paper we discuss a possible computational formalization of finding an explanation that is both probable and as informative as possible, thereby combining (at least some aspects of) both the hypothesis-generating and inference steps of the abduction process. The computational complexity of this formal problem, denoted Most Inforbable Explanation, is then established, and some problem parameters are investigated in order to gain a deeper understanding of what makes this problem intractable in general and under which circumstances it becomes tractable.
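To make the 'inference' step mentioned above concrete, the following is a minimal brute-force sketch of computing a most probable explanation (MPE) in a toy Bayesian network. The network structure, variable names, and probabilities are illustrative assumptions, not taken from the paper; the exhaustive enumeration shown here is exactly the kind of approach that becomes intractable as the number of hypothesis variables grows.

```python
from itertools import product

# Toy Bayesian network over binary variables: hypotheses H1 and H2,
# and evidence E, with structure H1 -> H2 -> E.
# All names and probabilities are illustrative only.
p_h1 = {True: 0.2, False: 0.8}
p_h2_given_h1 = {True: {True: 0.9, False: 0.1},      # p_h2_given_h1[h1][h2]
                 False: {True: 0.3, False: 0.7}}
p_e_given_h2 = {True: {True: 0.95, False: 0.05},     # p_e_given_h2[h2][e]
                False: {True: 0.1, False: 0.9}}

def joint(h1, h2, e):
    """Joint probability of one full assignment, factored along the network."""
    return p_h1[h1] * p_h2_given_h1[h1][h2] * p_e_given_h2[h2][e]

def most_probable_explanation(evidence_e=True):
    """Brute-force MPE: the assignment to (H1, H2) that maximizes the
    joint probability consistent with the observed evidence E."""
    best = max(product([True, False], repeat=2),
               key=lambda hs: joint(hs[0], hs[1], evidence_e))
    return best, joint(best[0], best[1], evidence_e)
```

With the numbers above, observing `E = True` yields the explanation `H1 = False, H2 = True` with joint probability 0.228. Note that this sketch only covers the inference step; the hypothesis variables are hard-coded, which is precisely the limitation the paper addresses.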
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Kwisthout, J. (2013). Most Inforbable Explanations: Finding Explanations in Bayesian Networks That Are Both Probable and Informative. In: van der Gaag, L.C. (ed.) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2013. Lecture Notes in Computer Science, vol. 7958. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39091-3_28
Print ISBN: 978-3-642-39090-6
Online ISBN: 978-3-642-39091-3