
SSIT: a sample selection-based incremental model training method for image recognition

  • Original Article
  • Published in Neural Computing and Applications

Abstract

In the big data environment, continually expanding image data sets require the image recognition process to adapt to changing sample characteristics and data distributions. Image recognition research therefore focuses on how incremental learning can balance the stability-plasticity dilemma under limited computing and storage resources. Existing incremental learning methods fall short in generalization performance, number of iteration rounds, convergence speed, and handling of class imbalance, so incremental training methods for image recognition merit dedicated study. In this study, a sample selection-based incremental model training method (SSIT) is proposed for image recognition. The training process is improved by optimizing the training samples selected for each iteration: a generalization error-based category determination method avoids imbalance among the training samples, and a dynamic weight-based sample selection method increases the recognition gain of each iteration. Experiments show that the method enhances the generalization ability of the model while balancing recognition performance across categories, reducing the number of iterations, and accelerating convergence.
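Since the full text is paywalled, the workflow the abstract outlines can only be sketched: per-iteration sample selection driven by per-class generalization error and dynamic sample weights. The Python sketch below is a hypothetical reading of that workflow; the error-proportional class quotas, the multiplicative weight update, and every function name are illustrative assumptions rather than the paper's actual definitions, and a classifier exposing fit/predict that has already been fitted on a seed set is assumed.

```python
# Hypothetical sketch of an SSIT-style training loop, reconstructed from the
# abstract alone. Quota and weight-update rules are illustrative assumptions.
import numpy as np

def per_class_error(model, X_val, y_val, n_classes):
    # Estimate per-class generalization error on a held-out validation split.
    preds = model.predict(X_val)
    return np.array([
        np.mean(preds[y_val == c] != c) if np.any(y_val == c) else 0.0
        for c in range(n_classes)
    ])

def select_batch(y_pool, class_err, weights, batch_size):
    # Category determination: give classes with higher estimated error a
    # larger share of the batch, keeping the selected set balanced toward
    # the classes the model currently handles worst.
    quota = np.maximum(
        1, np.round(batch_size * class_err / class_err.sum()).astype(int))
    chosen = []
    for c, q in enumerate(quota):
        idx = np.where(y_pool == c)[0]
        # Dynamic-weight selection: take the highest-weight samples per class.
        chosen.extend(idx[np.argsort(weights[idx])[::-1][:q]].tolist())
    return np.array(chosen)

def ssit_style_train(model, X_pool, y_pool, X_val, y_val,
                     n_classes, batch_size=256, n_rounds=10):
    # `model` is assumed to be already fitted on an initial seed set,
    # as in an incremental-learning setting.
    weights = np.ones(len(X_pool))               # dynamic per-sample weights
    for _ in range(n_rounds):
        err = per_class_error(model, X_val, y_val, n_classes) + 1e-8
        batch = select_batch(y_pool, err, weights, batch_size)
        model.fit(X_pool[batch], y_pool[batch])  # one incremental round
        # One plausible weight update: decay samples the model now gets
        # right, boost the ones it still misclassifies.
        correct = model.predict(X_pool[batch]) == y_pool[batch]
        weights[batch] *= np.where(correct, 0.9, 1.1)
    return model
```

For a quick trial, a warm-started scikit-learn classifier could stand in for `model`; in the paper's image recognition setting the model would be a deep network, so `fit` would correspond to a few epochs on the selected subset rather than a full retrain.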



Author information


Corresponding author

Correspondence to Yichuan Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, Y., Liu, Y., Yang, G. et al. SSIT: a sample selection-based incremental model training method for image recognition. Neural Comput & Applic 34, 3117–3134 (2022). https://doi.org/10.1007/s00521-021-06515-4
