Abstract
Synthetic aperture radar (SAR) can operate effectively in all weather conditions, making it a valuable tool in many fields. However, untrained observers cannot easily identify ground cover in SAR images by visual inspection, which limits practical applications such as environmental monitoring, disaster assessment, and land management. To address this issue, generative adversarial networks (GANs) have been used to transform SAR images into optical images, a technique commonly referred to as SAR-to-optical image translation. Despite its widespread use, traditional methods often generate optical images with color distortion and blurred contours. This paper therefore introduces an improved SAR-to-optical image translation method based on conditional generative adversarial networks (CGANs). A style-based calibration module learns the style features of the input SAR images and matches them to the style of real optical images, achieving color calibration and thereby minimizing the differences between the generated output and real optical images. Furthermore, a multi-scale strategy is incorporated into the discriminator: each branch captures texture and edge features at a different scale, enhancing the texture and edge information of the image at both local and global levels. Experimental results demonstrate that the proposed approach surpasses existing image translation techniques in both visual quality and quantitative evaluation metrics.
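The style-based calibration described above can be illustrated with a minimal sketch, assuming an SRM-style channel gate (this is an illustrative assumption, not the paper's exact module): the per-channel mean and standard deviation of a feature map are pooled as a style vector, combined through per-channel parameters (`w` and `b` below are hypothetical placeholders for learned weights), and squashed by a sigmoid into a gate that rescales each channel.

```python
import numpy as np

def style_recalibration(x, w, b):
    """Sketch of a style-based recalibration gate.

    x: feature map of shape (C, H, W).
    w: per-channel weights of shape (C, 2), combining the two
       style statistics (mean, std) into one logit per channel.
    b: per-channel bias of shape (C,).
    """
    mu = x.mean(axis=(1, 2))                 # per-channel mean, shape (C,)
    sigma = x.std(axis=(1, 2))               # per-channel std, shape (C,)
    style = np.stack([mu, sigma], axis=1)    # style vector, shape (C, 2)
    logits = (style * w).sum(axis=1) + b     # channel-wise combination
    g = 1.0 / (1.0 + np.exp(-logits))        # sigmoid gate in (0, 1)
    return x * g[:, None, None]              # rescale each channel

# Toy usage with random features and random (untrained) parameters.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((8, 2))
b = np.zeros(8)
y = style_recalibration(x, w, b)
```

In a real network the gate would sit after a convolutional block, with `w` and `b` trained jointly with the generator so that channel statistics of the output drift toward those of real optical images.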
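The multi-scale discriminator strategy can likewise be sketched under simplifying assumptions: the same patch-scoring function is applied to the image at full, half, and quarter resolution, so coarse branches judge global structure while the full-resolution branch judges local texture and edges. `score_fn` below is a hypothetical stand-in for one convolutional discriminator branch.

```python
import numpy as np

def avg_pool2(img):
    """2x2 average-pool an (H, W) image; assumes H and W are even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_scores(img, score_fn, n_scales=3):
    """Score an image at n_scales resolutions (full, 1/2, 1/4, ...).

    Each downsampled copy is passed to the same scoring function,
    mimicking a multi-scale discriminator whose branches share a
    common architecture but see different receptive fields.
    """
    scores = []
    for _ in range(n_scales):
        scores.append(score_fn(img))
        img = avg_pool2(img)  # halve resolution for the next branch
    return scores

# Toy usage: the "branch" here just returns the mean intensity,
# which average pooling preserves across scales.
img = np.arange(64.0).reshape(8, 8)
scores = multiscale_scores(img, lambda patch: float(patch.mean()))
```

During training, each branch would contribute its own adversarial loss term, so the generator is penalized for artifacts at every scale rather than only globally.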
This work was supported by the National Natural Science Foundation of China (No. 62006188, No. 62103311); Chinese Universities Scientific Fund (No. 2452022341); the QinChuangyuan High-Level Innovation and Entrepreneurship Talent Program of Shaanxi (No. 2021QCYRC4-50).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Zhan, T., Bian, J., Yang, J., Dang, Q., Zhang, E. (2024). Improved Conditional Generative Adversarial Networks for SAR-to-Optical Image Translation. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14428. Springer, Singapore. https://doi.org/10.1007/978-981-99-8462-6_23
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8461-9
Online ISBN: 978-981-99-8462-6