Abstract
In real-world settings, the class distribution of the data may change after a classifier has been built, degrading the classifier's performance. Previous attempts to address this problem with the Class Distribution Estimation (CDE) method yielded promising results; however, the CDE method remains biased toward the training data, so we improve it with an ensemble approach. Our Class Distribution Estimation-Ensemble (CDE-EM) methods estimate the class distribution from many models instead of one, resulting in less bias than the previous method. All methods are evaluated by accuracy on a set of benchmark UCI data sets. Experimental results demonstrate that our methods achieve better performance when the class distribution of the test data differs from that of the training data.
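The general idea the abstract describes (estimate the test-time class distribution by averaging over an ensemble of models, then correct the classifier's posteriors for the new priors) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the scikit-learn bagging ensemble and the helper names `estimate_class_distribution` and `adjust_posteriors` are chosen here for exposition.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def estimate_class_distribution(ensemble, X_unlabeled, n_classes=2):
    """Average each base model's predicted class frequencies on the
    unlabeled test set; averaging over many models reduces the bias
    a single model inherits from the training distribution."""
    estimates = []
    for model in ensemble.estimators_:
        preds = model.predict(X_unlabeled).astype(int)
        counts = np.bincount(preds, minlength=n_classes)
        estimates.append(counts / counts.sum())
    return np.mean(estimates, axis=0)

def adjust_posteriors(probs, train_prior, test_prior):
    """Standard prior-shift correction: rescale each posterior by the
    ratio of estimated test prior to training prior, then renormalize."""
    adjusted = probs * (test_prior / train_prior)
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Toy data with an imbalanced training distribution.
X, y = make_classification(n_samples=1000, weights=[0.7, 0.3],
                           random_state=0)
ens = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                        random_state=0).fit(X, y)

train_prior = np.bincount(y) / len(y)
test_prior = estimate_class_distribution(ens, X)
probs = adjust_posteriors(ens.predict_proba(X), train_prior, test_prior)
```

In a drift scenario, `X_unlabeled` would be the new, differently distributed test data; here the training set stands in only to keep the sketch runnable.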
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Limsetto, N., Waiyamai, K. (2011). Handling Concept Drift via Ensemble and Class Distribution Estimation Technique. In: Tang, J., King, I., Chen, L., Wang, J. (eds.) Advanced Data Mining and Applications. ADMA 2011. Lecture Notes in Computer Science, vol. 7121. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25856-5_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-25855-8
Online ISBN: 978-3-642-25856-5
eBook Packages: Computer Science (R0)