Learning Pseudo-independent Models: Analytical and Experimental Results

  • Conference paper
  • In: Advances in Artificial Intelligence (Canadian AI 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1822)

Abstract

Most algorithms for learning belief networks use single-link lookahead search for efficiency. It has been shown that such search procedures are problematic when applied to learning pseudo-independent (PI) models. Furthermore, some researchers have questioned whether PI models exist in practice.

We present two non-trivial PI models derived from a social study dataset. For one of them, the learned PI model reached the best prediction accuracy achievable from the data alone, while using only slightly more inference time than the learned non-PI model. These models provide evidence that PI models are not merely mathematical constructs.

To develop algorithms that learn PI models efficiently and effectively, we benefit from studying such models in depth. We further analyze how multiple PI submodels may interact within a larger domain model. Using this result, we show that the RML algorithm for learning PI models can learn more complex PI models than previously known.
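To make the notion of pseudo-independence concrete, the sketch below builds the classic parity construction (an illustrative example, not the paper's social-study dataset): three binary variables where every pair is marginally independent, yet the three are jointly dependent. A single-link lookahead search, which evaluates one pairwise link at a time, sees no dependence anywhere and so misses the model entirely.

```python
from itertools import product

# Illustrative PI distribution: X, Y uniform and independent, Z = X XOR Y.
# Only the four parity-consistent states carry probability mass.
P = {}
for x, y in product([0, 1], repeat=2):
    P[(x, y, x ^ y)] = 0.25

def marginal(P, idx):
    """Marginal distribution over the variables at positions idx."""
    m = {}
    for state, p in P.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

# Pairwise check: P(Vi, Vj) == P(Vi) * P(Vj) for every pair of variables,
# so no single link ever looks worth adding.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    pij = marginal(P, (i, j))
    pi, pj = marginal(P, (i,)), marginal(P, (j,))
    assert all(abs(pij.get((a, b), 0.0) - pi[(a,)] * pj[(b,)]) < 1e-12
               for a in [0, 1] for b in [0, 1])

# Joint check: the full joint is NOT the product of singleton marginals,
# so the three variables are collectively dependent.
pX, pY, pZ = (marginal(P, (i,)) for i in (0, 1, 2))
product_form = {(a, b, c): pX[(a,)] * pY[(b,)] * pZ[(c,)]
                for a in [0, 1] for b in [0, 1] for c in [0, 1]}
assert any(abs(P.get(s, 0.0) - product_form[s]) > 1e-6 for s in product_form)
```

The two checks pass together only for a PI pattern: pairwise independence hides the structure from greedy single-link scoring, while the joint discrepancy shows there is real dependence to learn, which is why multi-link lookahead is needed for such models.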




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Xiang, Y., Hu, X., Cercone, N.J., Hamilton, H.J. (2000). Learning Pseudo-independent Models: Analytical and Experimental Results. In: Hamilton, H.J. (eds) Advances in Artificial Intelligence. Canadian AI 2000. Lecture Notes in Computer Science, vol 1822. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45486-1_19

  • DOI: https://doi.org/10.1007/3-540-45486-1_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67557-0

  • Online ISBN: 978-3-540-45486-1

  • eBook Packages: Springer Book Archive
