An evolving classification cascade with self-learning

Original Paper · Evolving Systems

Abstract

Incremental learning is mostly accomplished by an incremental model relying on an appropriate and adjustable architecture. The present paper introduces a hybrid evolving architecture for incremental learning. The architecture consists of two sequential, incremental learning modules, a growing Gaussian mixture model (GGMM) and a resource-allocating neural network (RAN), and its rationale rests on two issues: incrementality and the ability to process partially labeled data in the context of classification. The two modules are coherent in the sense that both rely on Gaussian functions. While the RAN, trained by the extended Kalman filter, is used for prediction, the GGMM is dedicated to self-learning, that is, pre-labeling of unlabeled data within a probabilistic framework. In addition, an incremental feature selection procedure is applied to continuously choose the meaningful features. The empirical evaluation examines various aspects of the cascade in order to assess the efficiency of the proposed hybrid learning architecture.
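To make the flow of the cascade concrete, the sketch below mocks up the two modules in Python. All names (SimpleGMMPreLabeler, SimpleRAN, the confidence and novelty thresholds) are illustrative assumptions rather than the paper's implementation: the mixture is reduced to one spherical Gaussian per class, the network's extended Kalman filter update is replaced by a plain gradient step, and the incremental feature selection procedure is omitted.

```python
# A minimal sketch of the two-module cascade described in the abstract.
# Everything here is an illustrative simplification: a per-class Gaussian
# pre-labels unlabeled samples when its posterior is confident, and an
# RBF network in the spirit of RAN consumes the (pre-)labeled stream.
import numpy as np

class SimpleGMMPreLabeler:
    """One spherical Gaussian component per observed class (a crude
    stand-in for the growing Gaussian mixture model, GGMM)."""
    def __init__(self, confidence=0.9):
        self.confidence = confidence
        self.means, self.counts, self.labels = [], [], []

    def learn(self, x, y):
        # Incrementally update (or create) the component for class y.
        if y in self.labels:
            i = self.labels.index(y)
            self.counts[i] += 1
            self.means[i] += (x - self.means[i]) / self.counts[i]
        else:
            self.labels.append(y); self.means.append(x.copy()); self.counts.append(1)

    def pre_label(self, x):
        # Return a label only when the posterior is confident enough.
        if not self.means:
            return None
        d = np.array([np.sum((x - m) ** 2) for m in self.means])
        p = np.exp(-0.5 * d); p /= p.sum()
        i = int(np.argmax(p))
        return self.labels[i] if p[i] >= self.confidence else None

class SimpleRAN:
    """RBF network grown on novel inputs, loosely following Platt's
    resource-allocating network; weights are adapted by gradient descent
    here instead of the extended Kalman filter used in the paper."""
    def __init__(self, novelty=1.0, width=1.0, lr=0.1):
        self.novelty, self.width, self.lr = novelty, width, lr
        self.centers, self.weights = [], []

    def _phi(self, x):
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def update(self, x, y):
        # Allocate a new unit when x is far from every center; otherwise
        # nudge the existing weights toward the target.
        if not self.centers or min(np.sum((x - c) ** 2) for c in self.centers) > self.novelty:
            self.centers.append(x.copy()); self.weights.append(float(y))
        else:
            phi = self._phi(x)
            err = y - float(np.dot(self.weights, phi))
            self.weights = [w + self.lr * err * p for w, p in zip(self.weights, phi)]

    def predict(self, x):
        return float(np.dot(self.weights, self._phi(x))) if self.centers else 0.0

# Cascade over a partially labeled stream: labeled points train both
# modules; unlabeled points are first pre-labeled by the mixture and,
# only if accepted, passed on to the network.
gmm, ran = SimpleGMMPreLabeler(), SimpleRAN()
rng = np.random.default_rng(0)
for _ in range(200):
    y = rng.integers(0, 2)
    x = rng.normal(loc=3.0 * y, scale=0.5, size=2)
    if rng.random() < 0.5:           # labeled sample
        gmm.learn(x, int(y)); ran.update(x, int(y))
    else:                            # unlabeled sample: self-learning path
        y_hat = gmm.pre_label(x)
        if y_hat is not None:
            ran.update(x, y_hat)
print("prediction near class-1 mean:", round(ran.predict(np.array([3.0, 3.0])), 2))
```

The point of the sketch is the data flow rather than the exact update rules: labeled samples train both modules, while unlabeled samples reach the network only after the mixture pre-labels them with sufficient posterior confidence.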

Notes

  1. These two stages might be split into three, as is the case in some proposals.

  2. With special acknowledgment to Prof. Hagras and Dr. Doctor for providing the iDorm data used in this study.

Author information

Corresponding author

Correspondence to Abdelhamid Bouchachia.

About this article

Cite this article

Bouchachia, A. An evolving classification cascade with self-learning. Evolving Systems 1, 143–160 (2010). https://doi.org/10.1007/s12530-010-9014-x
