Abstract
Learning strategies under covariate shift have recently been widely discussed. Under covariate shift, the density of the training inputs differs from that of the test inputs. In such environments, learning machines need special learning strategies to generalize well.
Incremental learning methods are likewise designed for non-stationary learning environments, which can be regarded as a kind of covariate shift. Nevertheless, the relation between covariate-shift environments and incremental-learning environments has not been adequately discussed.
This paper focuses on covariate shift in incremental learning environments and presents a reconstruction of a suitable incremental learning method.
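To make the abstract's setting concrete, the following is a minimal sketch of one standard strategy under covariate shift: importance-weighted least squares. The specific densities, the linear model, and all variable names here are illustrative assumptions, not the paper's method; the training and test input densities are taken to be known Gaussians, whereas in practice the importance weights must be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# True target function; p(y|x) is the same at training and test time,
# only the input density p(x) shifts (the definition of covariate shift).
def f(x):
    return np.sin(x)

# Training inputs from N(0.5, 0.5^2); test inputs from N(1.5, 0.3^2).
x_tr = rng.normal(0.5, 0.5, 200)
y_tr = f(x_tr) + rng.normal(0.0, 0.1, 200)
x_te = rng.normal(1.5, 0.3, 200)
y_te = f(x_te)

# Importance weights w(x) = p_test(x) / p_train(x), computed from the
# (here assumed known) Gaussian densities.
def gauss(x, mu, s):
    return np.exp(-((x - mu) ** 2) / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

w = gauss(x_tr, 1.5, 0.3) / gauss(x_tr, 0.5, 0.5)

# Fit y = a*x + b by ordinary vs. importance-weighted least squares.
X = np.c_[x_tr, np.ones_like(x_tr)]
theta_ols = np.linalg.lstsq(X, y_tr, rcond=None)[0]

Xw = X * w[:, None]                       # row-weighted design matrix
theta_iw = np.linalg.solve(Xw.T @ X, Xw.T @ y_tr)

X_te = np.c_[x_te, np.ones_like(x_te)]
mse = lambda th: np.mean((X_te @ th - y_te) ** 2)
print(mse(theta_ols), mse(theta_iw))  # test error, unweighted vs. weighted
```

Because the linear model is misspecified for the sinusoidal target, the unweighted fit is biased toward the training region, while the weighted fit concentrates on the region where test inputs actually fall and typically achieves a lower test error.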
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Yamauchi, K. (2009). Covariate Shift and Incremental Learning. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_140
Print ISBN: 978-3-642-02489-4
Online ISBN: 978-3-642-02490-0