
A Direct Measure for the Efficacy of Bayesian Network Structures Learned from Data

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 4571))

Abstract

Current metrics for evaluating the performance of Bayesian network structure learning include order statistics of the data likelihood of learned structures, the average data likelihood, and average convergence time. In this work, we define a new metric that directly measures a structure learning algorithm’s ability to correctly model causal associations among variables in a data set. By treating membership in a Markov blanket as a retrieval problem, we use ROC analysis to compute a structure learning algorithm’s efficacy in capturing causal associations at varying strengths. Because our metric moves beyond error rate and data likelihood by measuring stability, it characterizes structure learning performance more completely. Because the structure learning problem is NP-hard, practical algorithms are either heuristic or approximate. For this reason, an understanding of a structure learning algorithm’s stability and boundary-value conditions is necessary. We contribute to the state of the art in the data-mining community with a new tool for understanding the behavior of structure learning techniques.
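The abstract's core idea, scoring a learned structure by how well it retrieves Markov blanket members of the true structure, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes DAGs are given as `{child: set_of_parents}` dictionaries, and it computes the true/false positive rates that would supply one operating point for the ROC analysis the paper describes.

```python
# Hedged sketch: Markov-blanket membership as a retrieval problem.
# A node's Markov blanket is its parents, its children, and its
# children's other parents ("spouses").

def markov_blanket(dag, node):
    """Markov blanket of `node` in a DAG given as {child: set_of_parents}."""
    parents = set(dag.get(node, set()))
    children = {c for c, ps in dag.items() if node in ps}
    spouses = set()
    for c in children:
        spouses |= dag.get(c, set())
    return (parents | children | spouses) - {node}

def mb_retrieval_rates(true_dag, learned_dag):
    """True/false positive rates of learned Markov-blanket membership,
    scored against the true structure over all ordered node pairs."""
    nodes = set(true_dag) | {p for ps in true_dag.values() for p in ps}
    tp = fp = fn = tn = 0
    for x in nodes:
        true_mb = markov_blanket(true_dag, x)
        learned_mb = markov_blanket(learned_dag, x)
        for y in nodes - {x}:
            in_true, in_learned = y in true_mb, y in learned_mb
            tp += in_true and in_learned
            fn += in_true and not in_learned
            fp += (not in_true) and in_learned
            tn += (not in_true) and not in_learned
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr
```

Sweeping a threshold on association strength (as the paper varies causal-association strength) and recording one (FPR, TPR) point per threshold would trace the ROC curve from which efficacy is read off.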

We gratefully acknowledge our funding agency. This work was funded by the Office of Naval Research (ONR) under contract number N00014-05-C-0541. The opinions expressed in this document are those of the authors and do not necessarily reflect the opinion of the Office of Naval Research or the government of the United States of America.



Editor information

Petra Perner


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Holness, G.F. (2007). A Direct Measure for the Efficacy of Bayesian Network Structures Learned from Data. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition. MLDM 2007. Lecture Notes in Computer Science, vol 4571. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73499-4_45


  • DOI: https://doi.org/10.1007/978-3-540-73499-4_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73498-7

  • Online ISBN: 978-3-540-73499-4

  • eBook Packages: Computer Science (R0)
