Covariate Shift and Incremental Learning

Conference paper in Advances in Neuro-Information Processing (ICONIP 2008).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5506)

Abstract

Learning strategies under covariate shift have recently been widely discussed. Under covariate shift, the density of the training inputs differs from that of the test inputs. In such environments, learning machines need special learning strategies to generalize well.

Incremental learning methods also target non-stationary learning environments, which can be regarded as a kind of covariate shift. However, the relation between covariate-shift environments and incremental-learning environments has not been adequately discussed.

This paper focuses on covariate shift in incremental-learning environments and reconstructs a suitable incremental learning method accordingly.
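To make the covariate-shift setting concrete, the following is a minimal, hypothetical sketch (not the paper's method) of the standard importance-weighting remedy: training examples are reweighted by the density ratio p_test(x)/p_train(x) so that the weighted training loss approximates the test loss. The Gaussian densities and polynomial model below are illustrative assumptions; in practice the density ratio must be estimated from data.

```python
# Hedged sketch: importance-weighted least squares under covariate shift.
# All distributions, models, and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(x)

# Training inputs drawn from one density, test inputs from a shifted one.
x_tr = rng.normal(loc=-1.0, scale=1.0, size=200)
y_tr = true_fn(x_tr) + rng.normal(scale=0.1, size=x_tr.size)
x_te = rng.normal(loc=1.0, scale=1.0, size=200)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x); densities are assumed
# known here, whereas in practice the ratio would be estimated directly.
w = gauss_pdf(x_tr, 1.0, 1.0) / gauss_pdf(x_tr, -1.0, 1.0)

# Weighted polynomial regression: minimise sum_i w_i * (y_i - f(x_i))^2.
Phi = np.vander(x_tr, N=4)            # cubic polynomial features
W = np.diag(w)
theta_w = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y_tr)

# Unweighted fit for comparison of test-region error.
theta_0 = np.linalg.solve(Phi.T @ Phi, Phi.T @ y_tr)
Phi_te = np.vander(x_te, N=4)
err_w = np.mean((Phi_te @ theta_w - true_fn(x_te)) ** 2)
err_0 = np.mean((Phi_te @ theta_0 - true_fn(x_te)) ** 2)
print(err_w, err_0)
```

The weighted estimator concentrates the fit where test inputs are likely, which is the generalization mechanism the abstract alludes to; an incremental learner in a drifting environment faces the same mismatch, accumulated over time.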





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yamauchi, K. (2009). Covariate Shift and Incremental Learning. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_140

  • DOI: https://doi.org/10.1007/978-3-642-02490-0_140

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02489-4

  • Online ISBN: 978-3-642-02490-0

  • eBook Packages: Computer Science (R0)
