
The error sample feature compensation method for improving the robustness of underwater classification and recognition models

Published in Applied Intelligence.

Abstract

With the intensification of ocean exploration and development in recent years, navigation equipment in the marine environment has become increasingly diversified, making that environment more complex. Traditional underwater target recognition methods are gradually becoming less applicable and no longer achieve good results. As deep learning is applied to underwater target recognition, the robustness of the models becomes crucial: underwater target data suffer significant environmental interference, and deep learning models are susceptible to adversarial samples. Addressing the strong influence of sample data quality on model robustness, this paper proposes an error sample feature compensation method for improving the robustness of deep learning models for underwater target recognition. The method innovatively divides error samples into difficult-to-improve and easy-to-improve samples and proposes an adversarial training method combined with classification conditions. It further uses a weighted index of model accuracy to combine the adversarially trained model with the feature compensation method, improving robustness on underwater target recognition tasks. The method is validated on an underwater dataset, and the results show that it improves the robustness of deep learning models used for underwater target recognition.
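The abstract's two central ideas can be illustrated with a minimal sketch. This is not the authors' implementation; the confidence threshold, the function names, and the linear weighting are all illustrative assumptions. The sketch treats a low-confidence wrong prediction (near the decision boundary) as "easy to improve" and a confident wrong prediction as "hard to improve", and combines clean and adversarial accuracy into a single weighted index.

```python
import numpy as np

def partition_errors(probs, labels, threshold=0.6):
    """Split misclassified samples into easy- and hard-to-improve groups.

    probs:  (N, C) softmax outputs of the classifier.
    labels: (N,) ground-truth class indices.
    A wrong prediction made with confidence below `threshold` lies near
    the decision boundary and is marked easy to improve; a confident
    wrong prediction is marked hard to improve. The threshold value is
    an assumption for illustration, not taken from the paper.
    """
    preds = probs.argmax(axis=1)          # predicted class per sample
    conf = probs.max(axis=1)              # confidence of that prediction
    wrong = preds != labels               # boolean mask of error samples
    easy = wrong & (conf < threshold)
    hard = wrong & (conf >= threshold)
    return easy, hard

def weighted_accuracy_index(acc_clean, acc_adv, w=0.5):
    """A simple accuracy-weighted index blending clean and adversarial
    accuracy, in the spirit of the paper's combination of the
    adversarially trained model with feature compensation. The linear
    form and weight w are illustrative assumptions."""
    return w * acc_clean + (1.0 - w) * acc_adv

# Example: three samples, two classes, true label is class 1 for all.
probs = np.array([[0.9, 0.1],   # confident error  -> hard
                  [0.55, 0.45], # marginal error   -> easy
                  [0.2, 0.8]])  # correct          -> neither
labels = np.array([1, 1, 1])
easy, hard = partition_errors(probs, labels)
```

In a full pipeline, the easy-to-improve partition might feed the feature compensation step while the hard-to-improve partition drives adversarial training, with the weighted index selecting how much weight each resulting model receives.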


(Figures 1–5 and Algorithm 1 appear in the full article.)


Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

This work was supported by the Basic Research Project (JCKY2022203B001).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Hongbin Wang.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Ethical and informed consent for data used


Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

He, M., Wang, J., Wang, H. et al. The error sample feature compensation method for improving the robustness of underwater classification and recognition models. Appl Intell 54, 7201–7212 (2024). https://doi.org/10.1007/s10489-024-05397-y

