Abstract
Most Relevant Explanation (MRE) is the problem of finding a partial instantiation of a set of target variables that maximizes the generalized Bayes factor (GBF) as the explanation for given evidence in a Bayesian network. MRE has a huge solution space and is extremely difficult to solve in large Bayesian networks. In this paper, we first prove that MRE is at least NP-hard. We then define a subproblem of MRE called MRE\(_k\) that finds the most relevant k-ary explanation and prove that the decision problem of MRE\(_k\) is \(NP^{\it PP}\)-complete. Since MRE needs to find the best solution found by MRE\(_k\) over all k, and we can also show that MRE is in \(NP^{\it PP}\), we conjecture that the decision problem of MRE is \(NP^{\it PP}\)-complete as well. Furthermore, we show that MRE remains in \(NP^{\it PP}\) even if we restrict the number of target variables to be within a log factor of the number of all unobserved variables. These complexity results prompt us to develop a suite of approximation algorithms for solving MRE. One algorithm finds an MRE solution by integrating reversible-jump MCMC and simulated annealing to simulate a non-homogeneous Markov chain that eventually concentrates its mass on the mode of a distribution of the GBF scores of all solutions. The other algorithms are all instances of local search methods, including forward search, backward search, and tabu search. We tested these algorithms on a set of benchmark diagnostic Bayesian networks. Our empirical results show that these methods could efficiently find optimal MRE solutions for most of the test cases in our experiments.
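The GBF objective and the forward local search mentioned in the abstract can be illustrated with a minimal sketch. All network parameters below are hypothetical, and the brute-force enumeration of the joint distribution merely stands in for proper Bayesian-network inference; this is a toy illustration of greedy forward search over partial instantiations, not the paper's implementation.

```python
from itertools import product

VARS = ("T1", "T2", "E")  # two binary targets and one binary evidence variable

def make_joint():
    """Enumerate the joint distribution of a toy network (illustrative CPTs)."""
    p_t1 = {0: 0.7, 1: 0.3}                                   # P(T1)
    p_t2 = {0: 0.6, 1: 0.4}                                   # P(T2)
    p_e = {(0, 0): 0.05, (0, 1): 0.4, (1, 0): 0.5, (1, 1): 0.9}  # P(E=1 | T1, T2)
    joint = {}
    for t1, t2, e in product((0, 1), repeat=3):
        pe = p_e[(t1, t2)] if e == 1 else 1 - p_e[(t1, t2)]
        joint[(t1, t2, e)] = p_t1[t1] * p_t2[t2] * pe
    return joint

def prob(joint, assignment):
    """Marginal probability of a partial assignment {var: value}."""
    idx = {v: i for i, v in enumerate(VARS)}
    return sum(p for world, p in joint.items()
               if all(world[idx[v]] == val for v, val in assignment.items()))

def gbf(joint, x, evidence):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x)."""
    p_x = prob(joint, x)
    p_xe = prob(joint, {**x, **evidence})
    p_e = prob(joint, evidence)
    num = p_xe / p_x
    den = (p_e - p_xe) / (1 - p_x)
    return float("inf") if den == 0 else num / den

def forward_search(joint, targets, evidence, values=(0, 1)):
    """Greedy forward search: grow the explanation one assignment at a time,
    keeping the extension that most improves GBF; stop when nothing improves."""
    best_x, best_score = {}, 0.0
    while True:
        cands = [{**best_x, v: val}
                 for v in targets if v not in best_x for val in values]
        if not cands:
            break
        cand = max(cands, key=lambda c: gbf(joint, c, evidence))
        score = gbf(joint, cand, evidence)
        if score <= best_score:
            break
        best_x, best_score = cand, score
    return best_x, best_score
```

On this toy network, `forward_search(make_joint(), ("T1", "T2"), {"E": 1})` first picks T1=1 and then extends to {T1: 1, T2: 1}, since the joint assignment has a higher GBF (about 3.55) than either single assignment. Backward and tabu search differ only in the neighborhood they explore, and the paper's MCMC method replaces this greedy step with stochastic moves across explanations of different sizes.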
Cite this article
Yuan, C., Lim, H. & Littman, M.L. Most Relevant Explanation: computational complexity and approximation methods. Ann Math Artif Intell 61, 159–183 (2011). https://doi.org/10.1007/s10472-011-9260-z
Keywords
- Most Relevant Explanation
- Computational complexity
- \(NP^{\it PP}\)-complete
- Reversible jump MCMC
- Local search