
Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space

Original article

The Visual Computer

Abstract

Image colorization is the task of adding color to a gray-level or single-channel image, a significant yet challenging problem in image processing, particularly for remote sensing images. This paper proposes a new method for colorizing remote sensing images based on a deep convolutional generative adversarial network (DCGAN). The generator is a symmetrical structure built on the auto-encoder principle, and a specially designed multi-scale convolutional module is introduced into it; as a result, the model retains more image features during up-sampling and down-sampling. Meanwhile, the discriminator adopts an 18-layer residual network (ResNet-18) that can compete with the generator, so that the generator and discriminator effectively optimize each other. In the proposed method, a color space transformation first converts remote sensing images from RGB to YUV. The Y channel (a gray-level image) is then used as the input of the network to predict the UV channels. Finally, the predicted UV channels are concatenated with the original Y channel to form a complete YUV image, which is transformed back into RGB space to obtain the final color image. Experiments comparing different image colorization methods show that the proposed method performs well in both visual quality and objective metrics for remote sensing image colorization.
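
For concreteness, the sketch below illustrates the YUV-based colorization pipeline described above in PyTorch. The color-conversion matrices are the standard BT.601 coefficients; the generator architecture (channel widths, layer counts, and a multi-scale block built from parallel 1x1/3x3/5x5 convolutions) and the `colorize` helper are simplified assumptions for illustration only, not the authors' exact implementation, and the ResNet-18 discriminator and adversarial training loop are omitted.

```python
# Illustrative sketch of the YUV colorization pipeline (not the paper's exact model).
import torch
import torch.nn as nn

# BT.601 RGB <-> YUV conversion matrices.
_RGB2YUV = torch.tensor([[ 0.299,  0.587,  0.114],
                         [-0.147, -0.289,  0.436],
                         [ 0.615, -0.515, -0.100]])
_YUV2RGB = torch.tensor([[1.0,  0.000,  1.140],
                         [1.0, -0.395, -0.581],
                         [1.0,  2.032,  0.000]])

def rgb_to_yuv(rgb):                      # rgb: (N, 3, H, W) in [0, 1]
    return torch.einsum('ij,njhw->nihw', _RGB2YUV.to(rgb), rgb)

def yuv_to_rgb(yuv):                      # yuv: (N, 3, H, W)
    return torch.einsum('ij,njhw->nihw', _YUV2RGB.to(yuv), yuv).clamp(0, 1)

class MultiScaleBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated, so
    features at several receptive-field sizes are kept (hypothetical simplification
    of the paper's multi-scale convolutional module)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.b1 = nn.Conv2d(in_ch, branch, 1)
        self.b3 = nn.Conv2d(in_ch, branch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch, 5, padding=2)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class Generator(nn.Module):
    """Symmetric encoder-decoder mapping the Y channel to the two UV channels."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            MultiScaleBlock(1, 64),   nn.Conv2d(64, 64, 4, 2, 1),   nn.ReLU(True),
            MultiScaleBlock(64, 128), nn.Conv2d(128, 128, 4, 2, 1), nn.ReLU(True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),  nn.ReLU(True),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh())
    def forward(self, y):
        return 0.5 * self.decoder(self.encoder(y))   # UV roughly in [-0.5, 0.5]

def colorize(rgb_batch, generator):
    """End-to-end pass: RGB -> YUV, predict UV from Y, recombine, convert back to RGB."""
    yuv = rgb_to_yuv(rgb_batch)
    y = yuv[:, :1]                        # luminance channel (the gray-level input)
    uv_pred = generator(y)                # predicted chrominance channels
    return yuv_to_rgb(torch.cat([y, uv_pred], dim=1))

if __name__ == "__main__":
    fake_images = torch.rand(2, 3, 64, 64)           # stand-in for remote sensing patches
    print(colorize(fake_images, Generator()).shape)  # -> torch.Size([2, 3, 64, 64])
```

In a full DCGAN setup as described in the abstract, this generator would be trained adversarially against a ResNet-18 discriminator that judges recombined YUV (or RGB) images, alongside a reconstruction loss on the predicted UV channels.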



Acknowledgements

This study was supported by the National Natural Science Foundation of China (No. 61863036). We also gratefully acknowledge the support of the China Postdoctoral Science Foundation (Nos. 2020T130564, 2019M653507), the Yunnan Province Postdoctoral Science Foundation, the Doctoral Candidate Academic Award of Yunnan Province, and Yunnan University's Research Innovation Fund for Graduate Students (Nos. 2019164, 2019166).

Author information


Corresponding author

Correspondence to Qian Jiang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

[Appendix figures h–k]

About this article

Cite this article

Wu, M., Jin, X., Jiang, Q. et al. Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space. Vis Comput 37, 1707–1729 (2021). https://doi.org/10.1007/s00371-020-01933-2
