Abstract
Most algorithms for learning belief networks use single-link lookahead search for efficiency. It has been shown that such search procedures are problematic when applied to learning pseudo-independent (PI) models. Furthermore, some researchers have questioned whether PI models exist in practice.
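To make the difficulty concrete, the following minimal sketch (our illustration, not an example from the paper) constructs the simplest PI domain: three binary variables in a parity relation. Every pair of variables is marginally independent, yet the three are jointly dependent, so a single-link lookahead search that scores one pairwise link at a time finds no dependence to exploit.

```python
# Minimal sketch of a pseudo-independent (PI) domain: three binary variables
# where every pair is marginally independent but the three are jointly
# dependent (a parity/XOR distribution). Illustrative only.
import itertools

# Joint distribution: X1, X2 uniform and independent; X3 = X1 XOR X2.
joint = {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    joint[(x1, x2, x1 ^ x2)] = 0.25

def marginal(vars_idx):
    """Marginal distribution over the given variable indices."""
    m = {}
    for cfg, p in joint.items():
        key = tuple(cfg[i] for i in vars_idx)
        m[key] = m.get(key, 0.0) + p
    return m

# Pairwise check: P(Xi, Xj) factorizes for every pair, so single-link
# lookahead sees no pairwise dependence worth adding as a link.
for i, j in itertools.combinations(range(3), 2):
    pij = marginal([i, j])
    pi, pj = marginal([i]), marginal([j])
    assert all(abs(pij[(a, b)] - pi[(a,)] * pj[(b,)]) < 1e-12
               for (a, b) in pij)

# Joint check: the full joint does NOT factorize into the product of
# marginals -- e.g. configuration (0, 0, 1) has probability 0, not 1/8.
print(joint.get((0, 0, 1), 0.0))                                        # 0.0
print(marginal([0])[(0,)] * marginal([1])[(0,)] * marginal([2])[(1,)])  # 0.125
```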
We present two non-trivial PI models derived from a social-study dataset. For one of them, the learned PI model reached the highest prediction accuracy achievable given only the data, while requiring slightly more inference time than the learned non-PI model. These models provide evidence that PI models are not merely mathematical constructs.
Developing efficient algorithms that learn PI models effectively benefits from studying such models in depth. We further analyze how multiple PI submodels may interact within a larger domain model. Using this result, we show that the RML algorithm for learning PI models can learn more complex PI models than previously known.
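As a hedged illustration of why looking ahead over several links at once matters (this sketch is not the RML algorithm itself), the code below quantifies the same parity domain with entropies: every pairwise mutual information is zero, so single-link scoring stalls, while the multi-information over all three variables is one bit, which only an evaluation of several links together can detect.

```python
# Hedged sketch (not the RML algorithm): single-link scoring vs. a multi-link
# view on the parity distribution from the earlier sketch.
from math import log2
import itertools

# Same parity joint: X1, X2 uniform and independent; X3 = X1 XOR X2.
joint = {(x1, x2, x1 ^ x2): 0.25
         for x1, x2 in itertools.product([0, 1], repeat=2)}

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(vars_idx):
    m = {}
    for cfg, p in joint.items():
        key = tuple(cfg[i] for i in vars_idx)
        m[key] = m.get(key, 0.0) + p
    return m

# Single-link score: pairwise mutual information I(Xi; Xj) is 0 for every pair.
for i, j in itertools.combinations(range(3), 2):
    mi = (entropy(marginal([i])) + entropy(marginal([j]))
          - entropy(marginal([i, j])))
    print(f"I(X{i+1}; X{j+1}) = {mi:.3f} bits")      # all 0.000

# Multi-link view: multi-information over all three variables is positive,
# revealing the collective dependence that single-link steps miss.
multi_info = sum(entropy(marginal([i])) for i in range(3)) - entropy(joint)
print(f"Multi-information = {multi_info:.3f} bits")  # 1.000
```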