Abstract
Image inpainting has recently become an active research direction in artificial intelligence owing to its significance in image restoration. Although traditional image inpainting approaches have shown promising results, they are prone to ambiguous artifacts, particularly in semantically rich and boundary regions. To address these challenges, we propose an efficient image inpainting model, the Gradient Semantics Generation Network-based Two-stage Image Inpainting Generator (GSGN-TSIIG), for comprehensive image restoration. First, we design GSGN, a gradient learning framework that employs a convolutional autoencoder as its backbone to accurately estimate gradient semantics in the hole region of corrupted images. Second, we develop TSIIG, an image inpainting network that exploits dedicated structural and textural generators to repair missing pixels and produce higher-quality inpainting results. For a comprehensive performance assessment, our model is evaluated on images drawn from diverse sources, including the MIT Places2, CelebA, Paris StreetView, and ImageNet datasets. Experimental results, including essential metrics and quantitative analyses, demonstrate the efficacy of our approach, which achieves a PSNR of 35.2 dB and a mean error of 0.01. Notably, comparative evaluations underscore the superior performance of the proposed model over existing approaches, establishing its potential for robust image restoration applications.
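The paper's network internals are not reproduced on this page, but the kind of gradient-semantics conditioning the abstract describes can be illustrated with a minimal sketch. The snippet below (plain NumPy; the Sobel operator and the 4-channel conditioning layout are illustrative assumptions, not the authors' exact design) computes gradient maps of a masked image and stacks them with the mask into the sort of input a gradient-generation network could learn to complete.

```python
import numpy as np

def sobel_gradients(img):
    """Compute horizontal and vertical Sobel gradient maps of a 2-D image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return gx, gy

# Toy corrupted image: a vertical intensity edge with a square hole (mask == 0).
img = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (16, 1))
mask = np.ones_like(img)
mask[6:10, 6:10] = 0  # hole region to be inpainted

# Gradients of the known region; inside the hole they are unreliable by design.
gx, gy = sobel_gradients(img * mask)

# Masked image + gradient maps + mask: a plausible conditioning tensor
# for a network that generates gradient semantics inside the hole.
cond = np.stack([img * mask, gx, gy, mask], axis=0)
print(cond.shape)  # (4, 16, 16)
```

A trained GSGN-style model would take such a tensor and predict completed gradient maps, which the second-stage structural and textural generators could then use to guide pixel synthesis.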
![Fig. 1](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11760-024-03653-9/MediaObjects/11760_2024_3653_Fig1_HTML.png)
![Fig. 2](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11760-024-03653-9/MediaObjects/11760_2024_3653_Fig2_HTML.png)
![Fig. 3](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11760-024-03653-9/MediaObjects/11760_2024_3653_Fig3_HTML.png)
![Fig. 4](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11760-024-03653-9/MediaObjects/11760_2024_3653_Fig4_HTML.png)
Availability of data and material
No datasets were generated or analysed during the current study.
Acknowledgements
Not applicable.
Funding
Not applicable.
Contributions
All agreed on the content of the study. MS and PDS collected all the data for analysis. MS agreed on the methodology. MS and PDS completed the analysis based on agreed steps. Results and conclusions are discussed and written together. All authors read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Human and animal rights
This article does not contain any studies with human or animal subjects performed by any of the authors.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Shanmugam, M., Sivakumar, P.D. GSGN-TSIIG: a gradient semantics generation network-based two-stage image inpainting generator for enhanced image restoration. SIViP 19, 111 (2025). https://doi.org/10.1007/s11760-024-03653-9