An integrated classification model for incremental learning

Multimedia Tools and Applications

Abstract

Incremental learning is a form of machine learning that allows a model to be updated incrementally as new data becomes available. In this way, the model can adapt to the new data without the lengthy, time-consuming process of complete retraining. However, existing incremental learning methods face two significant problems: (1) noise in the classification sample data, and (2) the poor accuracy of existing classification algorithms when applied to modern classification problems. To address these issues, this paper proposes an integrated classification model, the Pre-trained Truncated Gradient Confidence-weighted (Pt-TGCW) model. Because the pre-trained model can extract image information and transform it into a feature vector, the integrated model also shows advantages in image classification. Experimental results on ten datasets demonstrate that the proposed method outperforms its original counterparts.
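
The abstract describes a two-stage pipeline: a pre-trained network converts each image into a feature vector, and an online truncated-gradient confidence-weighted classifier is then updated incrementally on those vectors. The sketch below illustrates that general idea only; it is not the authors' Pt-TGCW algorithm. It combines a simplified, diagonal-covariance confidence-weighted update with a truncated-gradient sparsification step, and uses random vectors as stand-ins for features that a frozen pre-trained CNN would produce. The class name TGCWClassifier and all hyperparameters (r, gravity, theta, trunc_every) are invented for this illustration.

# Minimal sketch (illustration only): an online confidence-weighted classifier
# with a truncated-gradient sparsification step, trained on feature vectors.
# This is a simplified stand-in for the Pt-TGCW idea, not the paper's algorithm.

import numpy as np


class TGCWClassifier:
    """Binary online classifier: confidence-weighted update + truncated gradient."""

    def __init__(self, dim, r=1.0, gravity=0.01, theta=0.1, trunc_every=10):
        self.mu = np.zeros(dim)          # mean of the weight distribution
        self.sigma = np.ones(dim)        # diagonal covariance (per-weight confidence)
        self.r = r                       # regularizer for the confidence update
        self.gravity = gravity           # shrinkage strength of the truncation step
        self.theta = theta               # only weights with |w| <= theta are truncated
        self.trunc_every = trunc_every   # apply truncation every K updates
        self.t = 0                       # update counter

    def predict(self, x):
        return 1 if self.mu @ x >= 0.0 else -1

    def _truncate(self):
        # Truncated-gradient step: push small weights toward zero to keep the
        # model sparse; weights larger than theta are left untouched.
        small = np.abs(self.mu) <= self.theta
        shrunk = np.sign(self.mu) * np.maximum(np.abs(self.mu) - self.gravity, 0.0)
        self.mu = np.where(small, shrunk, self.mu)

    def update(self, x, y):
        # One online step with feature vector x and label y in {-1, +1}.
        self.t += 1
        margin = y * (self.mu @ x)
        confidence = x @ (self.sigma * x)               # x^T Sigma x (diagonal Sigma)
        if margin < 1.0:                                # hinge-loss violation
            beta = 1.0 / (confidence + self.r)
            alpha = (1.0 - margin) * beta
            self.mu += alpha * y * (self.sigma * x)     # move the mean toward the example
            self.sigma -= beta * (self.sigma * x) ** 2  # shrink per-weight uncertainty
        if self.t % self.trunc_every == 0:
            self._truncate()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 512  # e.g., the length of a feature vector from a pre-trained CNN

    # Synthetic stand-in for pre-trained features: in the real pipeline these
    # vectors would come from a frozen network applied to the input images.
    true_w = rng.normal(size=dim)
    X = rng.normal(size=(2000, dim))
    y = np.where(X @ true_w + 0.5 * rng.normal(size=2000) >= 0, 1, -1)

    clf = TGCWClassifier(dim)
    correct = 0
    for xi, yi in zip(X, y):
        correct += int(clf.predict(xi) == yi)   # test-then-train protocol
        clf.update(xi, yi)
    print(f"online accuracy: {correct / len(y):.3f}")
    print(f"nonzero weights: {np.count_nonzero(clf.mu)} / {dim}")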




Author information


Corresponding author

Correspondence to Ji Hu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hu, J., Yan, C., Liu, X. et al. An integrated classification model for incremental learning. Multimed Tools Appl 80, 17275–17290 (2021). https://doi.org/10.1007/s11042-020-10070-w


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11042-020-10070-w

Keywords