Artificial Intelligence

Volume 29, Issue 3, September 1986, Pages 241-288

Fusion, propagation, and structuring in belief networks

Judea Pearl

https://doi.org/10.1016/0004-3702(86)90072-X

Abstract

Belief networks are directed acyclic graphs in which the nodes represent propositions (or variables), the arcs signify direct dependencies between the linked propositions, and the strengths of these dependencies are quantified by conditional probabilities. A network of this sort can be used to represent the generic knowledge of a domain expert, and it turns into a computational architecture if the links are used not merely for storing factual knowledge but also for directing and activating the data flow in the computations which manipulate this knowledge.
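As a concrete reading of this definition, the sketch below encodes such a network as a directed acyclic graph in which each node stores its proposition's possible states and a conditional probability table quantifying its dependence on its parents. All names here (Node, BeliefNetwork, add_link) are illustrative placeholders, not notation from the paper.

```python
# A minimal sketch of the structure described above: nodes carry propositions,
# arcs record direct dependencies, and the strength of each dependency lives
# in the child's conditional probability table.

class Node:
    def __init__(self, name, states):
        self.name = name
        self.states = states      # the possible values of the proposition
        self.parents = []         # direct dependencies (incoming arcs)
        self.children = []        # propositions that depend on this one
        self.cpt = {}             # maps a tuple of parent states to a
                                  # probability distribution over self.states

class BeliefNetwork:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, states):
        node = Node(name, states)
        self.nodes[name] = node
        return node

    def add_link(self, parent, child):
        # An arc parent -> child; its strength is quantified in child.cpt.
        parent.children.append(child)
        child.parents.append(parent)
```

Using the links to drive computation, as the abstract suggests, means each node acts on messages arriving over exactly these arcs, which is what the propagation scheme below exploits.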

The first part of the paper deals with the task of fusing and propagating the impacts of new information through the networks in such a way that, when equilibrium is reached, each proposition will be assigned a measure of belief consistent with the axioms of probability theory. It is shown that if the network is singly connected (e.g. tree-structured), then probabilities can be updated by local propagation in an isomorphic network of parallel and autonomous processors and that the impact of new information can be imparted to all propositions in time proportional to the longest path in the network.
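The abstract does not spell the scheme out, but the flavor of such local propagation can be sketched for the simplest singly connected case: a tree in which every node has one parent. In the sketch below each node fuses a λ value (diagnostic support from its own evidence and its descendants) with a π value (causal support from its ancestors); the class and method names, and the numpy encoding of the conditional probability tables, are assumptions of this sketch rather than the paper's notation.

```python
import numpy as np

class TreeNode:
    """One autonomous processor per proposition, linked as in the tree."""

    def __init__(self, name, table, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        # Root: table[x] is the prior P(x).
        # Other nodes: table[u, x] is P(x | parent state u).
        self.table = np.asarray(table, dtype=float)
        self.evidence = np.ones(self.table.shape[-1])  # all-ones = unobserved
        if parent is not None:
            parent.children.append(self)

    def lambda_value(self):
        # lambda(x): support from this node's evidence and its subtree below.
        lam = self.evidence.copy()
        for child in self.children:
            lam *= child.lambda_message()
        return lam

    def lambda_message(self):
        # lambda-message to the parent: lambda_X(u) = sum_x P(x|u) lambda(x).
        return self.table @ self.lambda_value()

    def propagate(self, pi=None, beliefs=None):
        # Downward pi pass; combined with the upward lambda pass this assigns
        # every node BEL(x) proportional to pi(x) * lambda(x).
        if beliefs is None:
            beliefs = {}
        if pi is None:
            pi = self.table              # at the root, pi(x) is the prior
        bel = pi * self.lambda_value()
        beliefs[self.name] = bel / bel.sum()
        for child in self.children:
            # The pi-message to one child combines everything except that
            # child's own lambda contribution, so no evidence counts twice.
            pi_msg = pi * self.evidence
            for other in self.children:
                if other is not child:
                    pi_msg = pi_msg * other.lambda_message()
            # Child's pi(x) = sum_u pi_msg(u) P(x|u).
            child.propagate(pi_msg @ child.table, beliefs)
        return beliefs
```

For example, a two-node network with an observed symptom recovers the Bayesian posterior on its hidden cause:

```python
disease = TreeNode("disease", [0.01, 0.99])
symptom = TreeNode("symptom", [[0.9, 0.1],    # P(symptom | disease present)
                               [0.2, 0.8]],   # P(symptom | disease absent)
                   parent=disease)
symptom.evidence = np.array([1.0, 0.0])       # symptom observed present
print(disease.propagate()["disease"])         # posterior ~ [0.043, 0.957]
```

For brevity this sketch recomputes λ values on the way down; the paper's scheme stores the messages locally at each link, which is what lets new evidence reach every proposition in time proportional to the longest path.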

The second part of the paper deals with the problem of finding a tree-structured representation for a collection of probabilistically coupled propositions using auxiliary (dummy) variables, colloquially called “hidden causes.” It is shown that if such a tree-structured representation exists, then it is possible to uniquely uncover the topology of the tree by observing pairwise dependencies among the available propositions (i.e., the leaves of the tree). The entire tree structure, including the strengths of all internal relationships, can be reconstructed in time proportional to n log n, where n is the number of leaves.
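The reconstruction procedure itself is beyond the abstract, but its key ingredient can be sketched. For tree-structured dependencies among binary variables, correlation coefficients multiply along the path joining two leaves, so the three pairwise correlations of any leaf triplet determine each leaf's correlation with the hidden node where their paths meet. The function below (a hypothetical name, assuming positive correlations for simplicity) implements that identity; comparing such triplet statistics while inserting leaves one at a time into the growing tree is what makes an n log n reconstruction possible.

```python
import math

def junction_correlations(r_ij, r_ik, r_jk):
    """Given pairwise correlations among three leaves i, j, k of a tree of
    binary variables, where correlations multiply along tree paths, recover
    each leaf's correlation with the hidden junction node w:
        r_ij = r_iw * r_jw,   r_ik = r_iw * r_kw,   r_jk = r_jw * r_kw
    =>  r_iw = sqrt(r_ij * r_ik / r_jk), and symmetrically for j and k.
    A genuine tree must yield values in (0, 1]; a violation signals that no
    tree-structured representation fits this triplet.
    """
    r_iw = math.sqrt(r_ij * r_ik / r_jk)
    r_jw = math.sqrt(r_ij * r_jk / r_ik)
    r_kw = math.sqrt(r_ik * r_jk / r_ij)
    return r_iw, r_jw, r_kw
```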


This work was supported in part by the National Science Foundation, Grant #DSR 83-13875.
