
GSGN-TSIIG: a gradient semantics generation network-based two-stage image inpainting generator for enhanced image restoration

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Image inpainting has become a prominent research direction in artificial intelligence owing to its importance in image restoration. Although traditional image inpainting approaches have shown promising results, they are prone to ambiguous artifacts, particularly in regions with high semantic content and along object boundaries. To address these challenges, we propose an efficient image inpainting model termed the Gradient Semantics Generation Network-based Two-stage Image Inpainting Generator (GSGN-TSIIG) for comprehensive image restoration. First, we design GSGN, a gradient learning framework built on a convolutional autoencoder backbone, to accurately estimate gradient semantics in the hole regions of corrupted images. Second, we develop TSIIG, an image inpainting network that exploits structural and textural generators to repair missing pixels and produce higher-quality inpainting results. Our model is evaluated on image data drawn from diverse sources, including the Places-2_MIT, CelebA, Paris StreetView, and ImageNet datasets. Experimental results, including essential measures and quantitative analyses, demonstrate the efficacy of the approach, which achieves a PSNR of 35.2 dB and a mean error of 0.01. Comparative evaluations further underscore the superior performance of the proposed model over existing approaches, establishing its potential for robust image restoration applications.
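
To make the first stage concrete, the sketch below is a minimal, hypothetical illustration (not the authors' code): the layer widths, channel layout, mask conditioning, and Sobel-based gradient supervision are our own assumptions. It shows how a convolutional-autoencoder stage could predict gradient semantics for the hole region of a masked image, together with the PSNR metric quoted in the abstract.

```python
# Illustrative sketch only: a convolutional autoencoder that predicts gradient
# semantics for the masked (hole) region, plus the PSNR metric. All shapes and
# hyperparameters are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_gradients(img: torch.Tensor) -> torch.Tensor:
    """Per-channel horizontal/vertical gradients via Sobel filters: (B, C, H, W) -> (B, 2*C, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
    ky = kx.t()
    c = img.shape[1]
    weight = torch.stack([kx, ky]).unsqueeze(1).repeat(c, 1, 1, 1)  # (2*C, 1, 3, 3)
    return F.conv2d(img, weight, padding=1, groups=c)

class GradientAutoencoder(nn.Module):
    """Convolutional autoencoder backbone: masked image + mask in, gradient map out."""
    def __init__(self, in_ch: int = 4, out_ch: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, masked_img: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([masked_img, mask], dim=1)  # condition the network on the hole mask
        return self.decoder(self.encoder(x))

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)                   # stand-in ground-truth image
    mask = (torch.rand(1, 1, 64, 64) > 0.7).float()  # 1 = missing pixel
    masked = img * (1.0 - mask)
    net = GradientAutoencoder()
    pred_grad = net(masked, mask)                    # predicted gradient semantics
    loss = F.l1_loss(pred_grad * mask, sobel_gradients(img) * mask)  # supervise hole region only
    print(pred_grad.shape, loss.item(), psnr(masked, img).item())
```

In the full two-stage pipeline described in the abstract, such a predicted gradient map would then be passed, along with the masked image, to the structural and textural generators of TSIIG; that second stage is not sketched here.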

Availability of data and material

No datasets were generated or analysed during the current study.

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Contributions

All authors agreed on the content of the study. MS and PDS collected all the data for analysis. MS settled on the methodology. MS and PDS completed the analysis based on the agreed steps. The results and conclusions were discussed and written jointly. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Malathy Shanmugam.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

This article does not contain any studies with human or animal subjects performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Shanmugam, M., Sivakumar, P.D. GSGN-TSIIG: a gradient semantics generation network-based two-stage image inpainting generator for enhanced image restoration. SIViP 19, 111 (2025). https://doi.org/10.1007/s11760-024-03653-9
