Learning Bayesian Classifiers from Dependency Network Classifiers

  • Conference paper
Adaptive and Natural Computing Algorithms (ICANNGA 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4431)

Abstract

In this paper we propose a new method for learning Bayesian network classifiers indirectly rather than directly from data. The new model is a classifier based on dependency networks [1], a probabilistic graphical model similar to Bayesian networks but in which directed cycles are allowed. The benefit of this approach is that the learning process for dependency networks can be easier and simpler than for Bayesian networks, with the direct consequence that the learning algorithms can scale well. We show that it is possible to take advantage of this property to obtain Bayesian network classifiers without losing classification quality.
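The abstract states the idea only at a high level: fit a dependency network, whose per-variable conditional models may form directed cycles, and then derive a Bayesian network classifier from it. The Python sketch below illustrates one way such an indirect route can work; it is not the authors' algorithm (see the paper and [2] for that), and the parent-selection rule, the mutual-information threshold, and the cycle-breaking ordering used here are assumptions introduced purely for exposition.

```python
# Illustrative sketch only: the parent-selection rule, the threshold, and the
# cycle-breaking ordering below are assumptions for exposition, not the
# method proposed in the paper.
from collections import Counter
from math import log


def mutual_information(xs, ys):
    """Empirical mutual information between two discrete sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )


def learn_dependency_structure(data, threshold=0.05):
    """Pick each variable's parents independently; directed cycles are allowed,
    which is what makes this a dependency network rather than a Bayesian network."""
    variables = list(data)
    return {
        v: [u for u in variables
            if u != v and mutual_information(data[u], data[v]) > threshold]
        for v in variables
    }


def to_dag(parents, order):
    """Turn the (possibly cyclic) structure into a Bayesian-network DAG by
    keeping only parents that precede the child in a fixed ordering."""
    rank = {v: i for i, v in enumerate(order)}
    return {v: [u for u in ps if rank[u] < rank[v]] for v, ps in parents.items()}


if __name__ == "__main__":
    # Tiny synthetic dataset: a binary class C and three binary features.
    data = {
        "C":  [0, 0, 1, 1, 0, 1, 1, 0],
        "X1": [0, 0, 1, 1, 0, 1, 1, 1],
        "X2": [1, 0, 1, 0, 1, 0, 1, 0],
        "X3": [0, 0, 1, 1, 1, 1, 0, 0],
    }
    dn = learn_dependency_structure(data)
    bn = to_dag(dn, order=["C", "X1", "X2", "X3"])  # class first, as in a classifier
    print("dependency network parents:", dn)
    print("Bayesian network parents:  ", bn)
```

Because each parent set is chosen locally and independently of the others, the dependency-network step decomposes into per-variable problems, which is the kind of scalability advantage the abstract points to; the price is the extra step needed to recover an acyclic structure.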

References

  1. Heckerman, D., Chickering, D.M., Meek, C., Rounthwaite, R., Kadie, C.: Dependency Networks for Inference, Collaborative Filtering, and Data Visualization. Journal of Machine Learning Research 1, 49–75 (2000)

  2. Hulten, G., Chickering, D.M., Heckerman, D.: Learning Bayesian Networks From Dependency Networks: A Preliminary Study. In: Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics (2001)

  3. Sahami, M.: Learning Limited Dependence Bayesian Classifiers. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 335–338 (1996)

  4. Newman, D., Hettich, S., Blake, C., Merz, C.: UCI Repository of machine learning databases (1998), http://www.ics.uci.edu/~mlearn/MLRepository.html

  5. Fayyad, U., Irani, K.: Multi-interval discretization of continuous-valued attributes for classification learning. In: International Joint Conference on Artificial Intelligence (IJCAI), pp. 1022–1029. Morgan Kaufmann, San Francisco (1993)

  6. Dietterich, T.G.: Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation 10(7), 1895–1923 (1998)

  7. Friedman, N., Geiger, D., Goldszmidt, M.: Bayesian Network Classifiers. Machine Learning 29(2-3), 131–163 (1997)

  8. Duda, R.O., Hart, P.E.: Pattern Classification and Scene Analysis. John Wiley and Sons, Chichester (1973)

  9. Langley, P., Iba, W., Thompson, K.: An Analysis of Bayesian Classifiers. In: Proceedings of the 10th National Conference on Artificial Intelligence, pp. 223–228 (1992)

Editor information

Bartlomiej Beliczynski, Andrzej Dzielinski, Marcin Iwanowski, Bernardete Ribeiro

Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Gámez, J.A., Mateo, J.L., Puerta, J.M. (2007). Learning Bayesian Classifiers from Dependency Network Classifiers. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2007. Lecture Notes in Computer Science, vol 4431. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-71618-1_90

  • DOI: https://doi.org/10.1007/978-3-540-71618-1_90

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-71589-4

  • Online ISBN: 978-3-540-71618-1

  • eBook Packages: Computer Science, Computer Science (R0)
