Image Quality Assessment via Inter-class and Intra-class Differences for Efficient Classification


Abstract

With the development of data-centric artificial intelligence, increasing attention is being paid to the quality of image information. Based on the core idea that images in a dataset differ in intra-class information richness and inter-class information overlap, we propose a two-stage image quality assessment method. Images located in regions with both lower intra-class richness and a higher degree of inter-class overlap provide more information to the neural network and thus further improve model performance. Experiments on two public image classification datasets (CIFAR10 and mini-ImageNet) show that the proposed image information quality assessment method can effectively distinguish images of high information quality. Under the same budget, selecting images with higher information quality achieves better performance than selecting images with lower information quality (test accuracy: 1.69% higher on CIFAR10, 2.11% higher on mini-ImageNet).
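The full method is behind the subscription wall, so only the abstract's core idea can be illustrated here. The sketch below is a minimal, assumption-laden proxy for that idea: embed each image (e.g. with a pretrained backbone), treat the per-class mean embedding as a class prototype, and score an image higher when it lies far from its own prototype (low intra-class richness) and close to another class's prototype (high inter-class overlap). The function name `information_quality_scores`, the prototype-distance formulation, and the toy data are illustrative assumptions, not the paper's actual two-stage assessment.

```python
import numpy as np


def information_quality_scores(features, labels):
    """Score each image by an intra-class term (distance to its own class
    prototype) minus an inter-class term (distance to the nearest other-class
    prototype). High scores mark images near class boundaries, i.e. regions
    with low intra-class richness and high inter-class overlap."""
    classes = np.unique(labels)                          # sorted unique class ids
    prototypes = np.stack([features[labels == c].mean(axis=0) for c in classes])

    scores = np.empty(len(features))
    for i, (f, y) in enumerate(zip(features, labels)):
        dists = np.linalg.norm(prototypes - f, axis=1)   # distance to every class prototype
        own_idx = int(np.searchsorted(classes, y))
        intra = dists[own_idx]                           # distance to own prototype
        inter = np.delete(dists, own_idx).min()          # distance to nearest other-class prototype
        scores[i] = intra - inter                        # large intra + small inter => boundary region
    return scores


# Toy usage: 2-D "embeddings" for two Gaussian classes; the highest-scoring
# indices would be the images selected first under a fixed labeling budget.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
labs = np.array([0] * 50 + [1] * 50)
budget = 10
selected = np.argsort(-information_quality_scores(feats, labs))[:budget]
print(selected)
```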



Acknowledgements

This research was supported by the National Natural Science Foundation of China (No. 32101612 and No. 61871283). The authors would like to thank the Tianjin University Laboratory of Artificial Intelligence and Marine Information Processing for its support of this work.

Author information

Corresponding author

Correspondence to Yang Li.

Ethics declarations

Competing Interests

The authors declare no competing financial interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, J., Yang, Y., Li, Y. et al. Image Quality Assessment via Inter-class and Intra-class Differences for Efficient Classification. Neural Process Lett 55, 12169–12181 (2023). https://doi.org/10.1007/s11063-023-11414-x

