Cascading Customized Naïve Bayes Couple

Conference paper
Advances in Artificial Intelligence (Canadian AI 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6085)

Abstract

Naïve Bayes (NB) is an efficient and effective classifier in many cases. However, NB may perform poorly when its conditional independence assumption is violated. While most recent research focuses on improving NB by relaxing this assumption, we propose a new meta-learning technique that scales up NB by altering the strategy of traditional Cascade Learning (CL). The new technique is more effective than traditional CL and other meta-learning techniques such as Bagging and Boosting, while retaining the efficiency of Naïve Bayes learning.
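To make the setting concrete: Naïve Bayes assumes the attributes x1, …, xn are conditionally independent given the class c, so that P(x1, …, xn | c) = ∏i P(xi | c), and Cascade Learning feeds the outputs of one learner into the attribute space of the next. The sketch below shows a plain two-stage Naïve Bayes cascade in that spirit, in the style of Cascade Generalization (Gama and Brazdil, 2000). It is an illustration under assumed details; the dataset, the use of scikit-learn's GaussianNB, and the probability-appending coupling are all choices made here, not the customized couple proposed in the paper.

```python
# Minimal sketch of cascading two Naive Bayes learners: stage 2 sees the
# original attributes plus stage 1's class-probability estimates.
# Illustrative only; the paper's "Cascading Customized Naive Bayes Couple"
# customizes this cascade differently.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: a plain Naive Bayes learner on the original attributes.
nb1 = GaussianNB().fit(X_tr, y_tr)

# Stage 2: extend the attribute space with stage 1's class-probability
# estimates, then train a second Naive Bayes learner on the extension.
X_tr2 = np.hstack([X_tr, nb1.predict_proba(X_tr)])
X_te2 = np.hstack([X_te, nb1.predict_proba(X_te)])
nb2 = GaussianNB().fit(X_tr2, y_tr)

print("stage-1 accuracy:", nb1.score(X_te, y_te))
print("cascade accuracy:", nb2.score(X_te2, y_te))
```

The sketch only fixes the baseline intuition of passing one NB's probability estimates to another; the paper's contribution lies in how this generic scheme is altered and customized.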

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Li, G., Japkowicz, N., Stocki, T.J., Ungar, R.K. (2010). Cascading Customized Naïve Bayes Couple. In: Farzindar, A., Kešelj, V. (eds) Advances in Artificial Intelligence. Canadian AI 2010. Lecture Notes in Computer Science (LNAI), vol 6085. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13059-5_16

  • DOI: https://doi.org/10.1007/978-3-642-13059-5_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13058-8

  • Online ISBN: 978-3-642-13059-5

  • eBook Packages: Computer Science (R0)
