Abstract
Model selection is one of the key issues in building machine learning systems with strong generalization ability. This paper proposes a strategy for performing model selection incrementally in virtual concept drifting environments, where the distribution of learning samples varies over time. To carry out incremental model selection, a system generally uses all the learning samples observed so far. In virtual concept drifting environments, however, the distribution of the observed samples differs considerably from the actual distribution encountered at test time, so model selection based on the observed samples usually fails. To overcome this problem, the author previously proposed a weighted objective function and a model-selection criterion based on the predictive input density of the learning samples. Although that method performs well on some datasets, it occasionally fails to yield appropriate learning results because the prediction of the actual input density fails. The method proposed in this paper improves on the previous one by producing the desired outputs with an ensemble of the constructed radial basis function neural networks (RBFNNs). Experimental results indicate that the improved method yields stable performance.
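Since only the abstract is available here, the following Python sketch is an illustration of the two generic ingredients the abstract names, not the author's actual algorithm: (1) an importance-weighted (covariate-shift-corrected) objective for fitting a radial basis function network, and (2) averaging the predictions of an ensemble of fitted RBFNNs. The function names, the Gaussian-basis design, the ridge term, and the synthetic drift scenario are all assumptions introduced for this example.

import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial-basis design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_weighted_rbfnn(X, y, centers, width, importance, ridge=1e-6):
    """Weighted least squares: each sample is weighted by an estimate of the
    density ratio p_test(x) / p_train(x), the usual covariate-shift correction."""
    Phi = rbf_design(X, centers, width)
    W = np.diag(importance)
    A = Phi.T @ W @ Phi + ridge * np.eye(Phi.shape[1])
    b = Phi.T @ W @ y
    return np.linalg.solve(A, b)

def ensemble_predict(X, models):
    """Average the outputs of several (centers, width, weights) RBFNNs."""
    preds = [rbf_design(X, c, w) @ theta for (c, w, theta) in models]
    return np.mean(preds, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Training inputs come from one region; test inputs drift to another
    # (a virtual concept drift: the target function itself is unchanged).
    X_tr = rng.normal(-1.0, 0.5, size=(200, 1))
    y_tr = np.sin(2 * X_tr[:, 0]) + 0.1 * rng.normal(size=200)
    X_te = rng.normal(0.5, 0.5, size=(100, 1))

    # Toy importance weights from the known Gaussian densities; in practice
    # these must be estimated, e.g. with a direct density-ratio estimator.
    def gauss(x, m, s):
        return np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
    imp = gauss(X_tr[:, 0], 0.5, 0.5) / gauss(X_tr[:, 0], -1.0, 0.5)

    # Candidate models of different complexity; the ensemble averages them
    # instead of committing to a single selected model.
    models = []
    for k in (5, 10, 20):
        centers = np.linspace(-2, 2, k)[:, None]
        theta = fit_weighted_rbfnn(X_tr, y_tr, centers, 0.5, imp)
        models.append((centers, 0.5, theta))

    err = np.mean((ensemble_predict(X_te, models) - np.sin(2 * X_te[:, 0])) ** 2)
    print(f"ensemble test MSE under drift: {err:.4f}")

In the sketch the importance weights are computed from the known training and test densities for clarity; under real drift they would have to be estimated from data, which is exactly the step whose occasional failure motivates the ensemble in the paper.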
Cite this paper
Yamauchi, K. (2010). Incremental Model Selection and Ensemble Prediction under Virtual Concept Drifting Environments. In: Zhang, B.-T., Orgun, M.A. (eds.) PRICAI 2010: Trends in Artificial Intelligence. Lecture Notes in Computer Science, vol. 6230. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15246-7_52