Weighted Proportional k-Interval Discretization for Naive-Bayes Classifiers

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2637)

Abstract

The use of different discretization techniques can be expected to affect the classification bias and variance of naive-Bayes classifiers. We call such an effect discretization bias and variance. Proportional k-interval discretization (PKID) tunes discretization bias and variance by adjusting discretized interval size and number in proportion to the number of training instances. Theoretical analysis suggests that this is desirable for naive-Bayes classifiers. However, PKID is sub-optimal when learning from small amounts of training data. We argue that this is because PKID gives equal weight to bias reduction and variance reduction. For small data, however, variance reduction can contribute more to lowering learning error and thus should be given greater weight than bias reduction. Accordingly, we propose weighted proportional k-interval discretization (WPKID), which establishes a more suitable bias and variance trade-off for small data while still allowing additional training data to be used to reduce both bias and variance. Our experiments demonstrate that for naive-Bayes classifiers, WPKID improves upon PKID for smaller datasets with significant frequency, and WPKID delivers lower classification error significantly more often than not in comparison with three other leading discretization techniques studied.

‘Small’ is a relative rather than an absolute term. Of necessity, we here adopt an arbitrary definition, deeming datasets with no more than 1000 instances as ‘smaller’ datasets and all others as ‘larger’ datasets.
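As a rough illustration of the mechanism described in the abstract, the sketch below implements equal-frequency discretization in which both the number of intervals and the instances per interval grow with the training-set size N (read here as roughly sqrt(N) each). The minimum-frequency parameter min_freq stands in for the extra weight WPKID places on variance reduction for small data; the function name, the sqrt(N) reading, and min_freq are illustrative assumptions based only on the abstract, not the authors' exact formulation.

import numpy as np

def pkid_style_discretize(values, min_freq=0):
    """Equal-frequency discretization in the spirit of PKID / WPKID.

    Sketch based only on the abstract: interval frequency and interval
    number both grow with the training-set size N (roughly sqrt(N) each).
    `min_freq` is a hypothetical floor on instances per interval, standing
    in for the extra weight WPKID gives to variance reduction on small
    data; it is not the authors' exact formulation.
    """
    values = np.sort(np.asarray(values, dtype=float))
    n = len(values)
    freq = int(np.sqrt(n))           # PKID-style: instances per interval ~ sqrt(N)
    freq = max(freq, min_freq, 1)    # WPKID-style (assumed): enforce a minimum frequency
    n_intervals = max(n // freq, 1)  # interval number then also grows with N
    # Cut points at equal-frequency positions of the sorted values.
    cut_idx = [round(i * n / n_intervals) for i in range(1, n_intervals)]
    return [(values[i - 1] + values[i]) / 2.0 for i in cut_idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    print(len(pkid_style_discretize(x)) + 1, "intervals for N = 500 (PKID-style)")
    print(len(pkid_style_discretize(x, min_freq=30)) + 1, "intervals with a minimum frequency of 30")

With N = 500 the plain PKID-style setting yields about 22 intervals of roughly 22 instances each, while imposing a minimum frequency of 30 yields fewer, larger intervals, which is the direction of the bias/variance trade-off the paper argues for on small data.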

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, Y., Webb, G.I. (2003). Weighted Proportional k-Interval Discretization for Naive-Bayes Classifiers. In: Whang, K.-Y., Jeon, J., Shim, K., Srivastava, J. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2003. Lecture Notes in Computer Science (LNAI), vol 2637. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36175-8_50

  • DOI: https://doi.org/10.1007/3-540-36175-8_50

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-04760-5

  • Online ISBN: 978-3-540-36175-6

  • eBook Packages: Springer Book Archive
