
Learning Instance Weighted Naive Bayes from labeled and unlabeled data

Published in: Journal of Intelligent Information Systems

Abstract

In real-world data mining applications, unlabeled instances are often abundant while labeled instances are scarce. Semi-supervised learning, which seeks to benefit from large amounts of unlabeled data together with the labeled data, has therefore attracted much attention from researchers. In this paper, we propose a fast and highly effective semi-supervised learning algorithm, which we call Instance Weighted Naive Bayes (IWNB). IWNB first trains a naive Bayes classifier on the labeled instances only, and uses it to estimate the class membership probabilities of the unlabeled instances. These estimated probabilities are then used to label and weight the unlabeled instances. Finally, a naive Bayes classifier is trained again on both the originally labeled data and the newly labeled, weighted unlabeled data. Experimental results on a large number of UCI data sets show that IWNB often improves the classification accuracy of the original naive Bayes when available labeled data are very limited.
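The procedure described above can be sketched in code. The following is an illustrative reconstruction from the abstract only, not the authors' implementation: the class and function names are invented, and using the maximum (normalized) class membership probability as the instance weight is an assumption consistent with, but not fully specified by, the abstract.

```python
import numpy as np

class WeightedNB:
    """Categorical naive Bayes with per-instance weights and Laplace smoothing."""

    def fit(self, X, y, w=None):
        X, y = np.asarray(X), np.asarray(y)
        w = np.ones(len(y)) if w is None else np.asarray(w, dtype=float)
        self.classes_ = np.unique(y)
        n_feat = X.shape[1]
        n_vals = [int(X[:, j].max()) + 1 for j in range(n_feat)]
        # Weighted class priors with Laplace smoothing.
        cw = np.array([w[y == c].sum() + 1.0 for c in self.classes_])
        self.log_prior_ = np.log(cw) - np.log(w.sum() + len(self.classes_))
        # Weighted, smoothed conditional probabilities P(x_j = v | c).
        self.log_cond_ = []
        for j in range(n_feat):
            t = np.empty((len(self.classes_), n_vals[j]))
            for ci, c in enumerate(self.classes_):
                for v in range(n_vals[j]):
                    t[ci, v] = w[(y == c) & (X[:, j] == v)].sum() + 1.0
            t /= t.sum(axis=1, keepdims=True)
            self.log_cond_.append(np.log(t))
        return self

    def predict_proba(self, X):
        X = np.asarray(X)
        logp = np.tile(self.log_prior_, (len(X), 1))
        for j, lc in enumerate(self.log_cond_):
            logp += lc[:, X[:, j]].T
        # Normalize to class membership probabilities.
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        return p / p.sum(axis=1, keepdims=True)

def iwnb(X_lab, y_lab, X_unlab):
    """IWNB as outlined in the abstract: pseudo-label and weight the
    unlabeled instances with a naive Bayes trained on the labeled data,
    then retrain on the union."""
    nb = WeightedNB().fit(X_lab, y_lab)
    proba = nb.predict_proba(X_unlab)             # normalized membership probs
    pseudo_y = nb.classes_[proba.argmax(axis=1)]  # most probable class as label
    weights = proba.max(axis=1)                   # confidence as instance weight
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo_y])
    w_all = np.concatenate([np.ones(len(y_lab)), weights])
    return WeightedNB().fit(X_all, y_all, w_all)
```

Because the pseudo-labeled instances carry weights below one, low-confidence unlabeled instances contribute less to the retrained model's counts than the originally labeled data, which is what distinguishes this scheme from plain self-training with hard labels.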


Notes

  1. The estimated class membership probabilities are normalized.


Acknowledgements

We thank anonymous reviewers for their valuable comments and suggestions. The work was supported by the National Natural Science Foundation of China (No. 60905033), the Provincial Natural Science Foundation of Hubei (No. 2009CDB139), and the Fundamental Research Funds for the Central Universities (No. CUG090109).

Author information

Corresponding author

Correspondence to Liangxiao Jiang.

About this article

Cite this article

Jiang, L. Learning Instance Weighted Naive Bayes from labeled and unlabeled data. J Intell Inf Syst 38, 257–268 (2012). https://doi.org/10.1007/s10844-011-0153-8

