
Incremental Adaptive Learning Vector Quantization for Character Recognition with Continuous Style Adaptation

Cognitive Computation

Abstract

Incremental learning enables continuous model adaptation from a constantly arriving data stream. It parallels the human cognitive system, which learns to recognize objects in a changing world. Incremental learning for character recognition is a typical scenario in which characters appear sequentially and the font or writing style changes irregularly. In this paper, we investigate how to classify characters incrementally (i.e., input patterns arrive one at a time). A reasonable assumption is that adjacent characters from the same font or the same writer share the same style over a short period, whereas style variation occurs across characters printed in different fonts or written by different persons over a long period. The challenge is to exploit this local style consistency while also adapting incrementally to continuous style variation. For this purpose, we propose a continuous incremental adaptive learning vector quantization (CIALVQ) method, which incrementally learns a self-adaptive style transfer matrix that maps input patterns from a style-conscious space onto a style-free space. After style transformation, the problem is cast as a common character recognition task and an incremental learning vector quantization (ILVQ) classifier is applied. Within this framework, we consider two learning modes: supervised incremental learning and active incremental learning. In the latter mode, class labels are requested only for samples that receive low confidence from the classifier. We evaluated the classification performance of CIALVQ in two scenarios, interleaved test-then-train and style-specific classification, on NIST hand-printed data sets. The results show that exploiting local style consistency improves accuracy in both test scenarios and in both supervised and active incremental learning modes.
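The core idea described above, a style transfer matrix learned incrementally in front of an ILVQ classifier, with an optional active query on low-confidence samples, can be sketched roughly as follows. This is a minimal illustration under assumed update rules (LVQ1-style prototype moves, a plain gradient step on the transfer matrix, and a margin-based confidence proxy), not the authors' exact CIALVQ formulation; the class name and all parameters are hypothetical.

```python
import numpy as np


class StyleAdaptiveILVQ:
    """Minimal sketch of the CIALVQ idea: a style transfer matrix A maps each
    input into a style-free space, where an incremental LVQ classifier is
    applied. The update rules here are assumptions for illustration only."""

    def __init__(self, dim, lr_proto=0.05, lr_style=0.01, conf_threshold=0.2):
        self.A = np.eye(dim)        # style transfer matrix, starts as identity
        self.prototypes = []        # list of (vector in style-free space, label)
        self.lr_proto = lr_proto
        self.lr_style = lr_style
        self.conf_threshold = conf_threshold

    def predict(self, x):
        z = self.A @ x              # map to the style-free space
        dists = np.array([np.linalg.norm(z - p) for p, _ in self.prototypes])
        order = np.argsort(dists)
        i = order[0]
        j = order[1] if len(order) > 1 else order[0]
        label = self.prototypes[i][1]
        # confidence proxy: relative margin between the two nearest prototypes
        conf = (dists[j] - dists[i]) / (dists[j] + dists[i] + 1e-12)
        return label, conf, z, i

    def partial_fit(self, x, y=None, oracle=None):
        """One incremental step. In active mode, `oracle(x)` is called to get
        the true label when the classifier's confidence is low."""
        if not self.prototypes:
            if y is None and oracle is not None:
                y = oracle(x)
            self.prototypes.append((self.A @ x, y))
            return y
        pred, conf, z, i = self.predict(x)
        if y is None and oracle is not None and conf < self.conf_threshold:
            y = oracle(x)                       # active query on low confidence
        if y is None:
            return pred                         # no label available: predict only
        proto, label = self.prototypes[i]
        if label == y:
            # LVQ1-style attraction of the winning prototype
            self.prototypes[i] = (proto + self.lr_proto * (z - proto), label)
            # nudge A so the transformed sample moves toward its prototype
            self.A -= self.lr_style * np.outer(z - proto, x)
        else:
            # repel the wrong winner and insert a prototype for the true class
            self.prototypes[i] = (proto - self.lr_proto * (z - proto), label)
            self.prototypes.append((z.copy(), y))
        return pred
```

In an interleaved test-then-train evaluation, `partial_fit` would be called once per incoming character: the returned prediction is scored before the model updates itself, and in active mode the oracle is consulted only when the margin-based confidence falls below the threshold.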




Author information


Corresponding author

Correspondence to Cheng-Lin Liu.

Ethics declarations

Funding

This work was supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) under Grant XDB02060009 and by the National Natural Science Foundation of China (NSFC) under Grant 61411136002.

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.


About this article


Cite this article

Shen, YY., Liu, CL. Incremental Adaptive Learning Vector Quantization for Character Recognition with Continuous Style Adaptation. Cogn Comput 10, 334–346 (2018). https://doi.org/10.1007/s12559-017-9491-3

