Towards domain adaptation underwater image enhancement and restoration

  • Regular Paper
  • Published:
Multimedia Systems

Abstract

Deep convolutional neural networks have recently made significant progress in underwater image enhancement and restoration. However, most existing methods rely on fixed-scale convolutional kernels, which overfit easily in practice and therefore adapt poorly to new domains. In this paper, we propose an underwater image enhancement and restoration network built on an encoder-decoder framework that focuses on extracting generic features of degraded underwater images, yielding better restoration performance together with stronger domain adaptation. We first propose an Atrous spatial attention module, which applies Atrous convolution to enlarge the receptive field and cooperates with a spatial attention mechanism to accurately localize hazy regions of the image. A feature aggregation method, the Cross-Scale Skip connection, then fuses global features rich in spatial location information with local features and integrates them into the decoder, ensuring that restored regions remain consistent with their surrounding pixels. Finally, to bring the restored image closer to the ground truth, we replace the L1 distance with a novel weighted Euclidean color distance, which serves as the reconstruction loss. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance and adapts well across a range of conditions.
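As a rough illustration of the reconstruction loss described in the abstract, the sketch below computes a per-pixel weighted Euclidean color distance between a restored image and its ground truth, in place of an L1 term. The function name and the channel weights are illustrative assumptions and do not reproduce the paper's exact weighting.

```python
import torch


def weighted_color_distance_loss(pred: torch.Tensor,
                                 target: torch.Tensor,
                                 channel_weights=(3.0, 4.0, 2.0)) -> torch.Tensor:
    """Per-pixel weighted Euclidean color distance between a restored image
    and its ground truth, averaged over all pixels. The channel weights are
    illustrative placeholders, not the weighting used in the paper.

    pred, target: (N, 3, H, W) RGB tensors with values in [0, 1].
    """
    w = torch.as_tensor(channel_weights, dtype=pred.dtype, device=pred.device).view(1, 3, 1, 1)
    squared_diff = w * (pred - target) ** 2                      # weighted squared channel differences
    per_pixel_dist = torch.sqrt(squared_diff.sum(dim=1) + 1e-8)  # Euclidean distance at each pixel
    return per_pixel_dist.mean()


# Example usage with random tensors standing in for a restored image and its reference.
restored = torch.rand(4, 3, 256, 256)
reference = torch.rand(4, 3, 256, 256)
loss = weighted_color_distance_loss(restored, reference)
```

In practice such a term would be combined with the network's other losses in the usual way; the weighting simply emphasizes color channels unequally rather than treating all channel errors alike as L1 does.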


Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available because we do not have permission to release them, but they are available from the corresponding author on reasonable request.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 61871124 and 61876037), the National Defense Pre-Research Foundation of China, the fund of the Science and Technology on Sonar Laboratory (No. 6142109KF201806), and the Stable Supporting Fund of the Acoustic Science and Technology Laboratory (No. JCKYS2019604SSJSSO12).

Author information

Authors and Affiliations

Authors

Contributions

CY and LJ wrote the main manuscript text, ZL conducted the experiments, and JH prepared the figures. All authors reviewed the manuscript.

Corresponding author

Correspondence to Longyu Jiang.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest relevant to the content of this article.

Ethical approval

This article does not contain any studies with animals performed by any of the authors.

Additional information

Communicated by P. Pala.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, C., Jiang, L., Li, Z. et al. Towards domain adaptation underwater image enhancement and restoration. Multimedia Systems 30, 62 (2024). https://doi.org/10.1007/s00530-023-01246-z

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s00530-023-01246-z

Keywords
