Abstract
Naïve Bayes (NB) is an efficient and effective classifier in many cases. However, NB can perform poorly when its conditional independence assumption is violated. While most recent research focuses on improving NB by relaxing the conditional independence assumption, we propose a new meta-learning technique that scales up NB by adopting a strategy altered from traditional Cascade Learning (CL). The new meta-learning technique is more effective than traditional CL and other meta-learning techniques such as Bagging and Boosting, while maintaining the efficiency of Naïve Bayes learning.
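To make the cascading idea concrete, below is a minimal sketch of cascade-style learning with Naïve Bayes: a first NB is trained on the raw attributes, its class-probability outputs are appended to the feature vector, and a second NB is trained on the extended representation. This is an illustrative assumption in the spirit of traditional Cascade Learning, not the paper's exact Cascading Customized Naïve Bayes Couple algorithm; the dataset and scikit-learn estimators are stand-ins chosen for the example.

```python
# Hedged sketch: cascade-style Naive Bayes, assuming scikit-learn and a
# stand-in dataset. Illustrates the general CL strategy only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level-1 learner: plain Naive Bayes on the original attributes.
nb1 = GaussianNB().fit(X_tr, y_tr)

# Extend each instance with the level-1 class-probability estimates,
# so the level-2 learner sees both raw features and nb1's beliefs.
X_tr_ext = np.hstack([X_tr, nb1.predict_proba(X_tr)])
X_te_ext = np.hstack([X_te, nb1.predict_proba(X_te)])

# Level-2 learner: a second Naive Bayes on the extended feature space.
nb2 = GaussianNB().fit(X_tr_ext, y_tr)
print("cascaded NB accuracy: %.3f" % nb2.score(X_te_ext, y_te))
```

Because both levels are Naïve Bayes, the cascade keeps NB's linear training cost while letting the second learner correct systematic errors of the first.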
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Li, G., Japkowicz, N., Stocki, T.J., Ungar, R.K. (2010). Cascading Customized Naïve Bayes Couple. In: Farzindar, A., Kešelj, V. (eds) Advances in Artificial Intelligence. Canadian AI 2010. Lecture Notes in Computer Science, vol 6085. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13059-5_16
DOI: https://doi.org/10.1007/978-3-642-13059-5_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-13058-8
Online ISBN: 978-3-642-13059-5