Highlight Removal from a Single Image Based on a Prior Knowledge Guided Unsupervised CycleGAN

Conference paper in Advances in Computer Graphics (CGI 2023)

Abstract

Highlights are common in optical images of many objects, such as high-gloss leather, glass, plastic, metal parts, and other specularly reflective surfaces. This makes it difficult to directly apply optical measurement techniques, such as object detection, intrinsic image decomposition, and tracking, which are designed for objects with diffuse reflection characteristics. In this paper, we propose a specular-to-diffuse image conversion network based on an improved CycleGAN that automatically removes image highlights. It requires no paired training data, and experimental results verify the effectiveness of our method. The framework makes two main contributions. First, we propose a confidence map based on independent average values as the initial value, which addresses the slow convergence caused by the lack of a strict mathematical definition distinguishing specular reflection components from diffuse reflection components. Second, we design a logarithm-based transformation in the generator that makes the specular and diffuse reflection components comparable, solving the anisotropy problem in the optimization process; this problem arises because the peak specular reflection on a glossy surface is much larger than the off-peak diffuse reflection. Compared with the latest methods, our algorithm achieves significantly higher SSIM and PSNR values, and the comparative experiments show that it significantly improves image conversion quality.
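The two ideas in the abstract can be illustrated concretely. The sketch below is not the paper's implementation; `log_compress` and `mean_confidence_map` are hypothetical names, and the exact normalization used in the paper is an assumption. It shows (a) a logarithmic compression that brings bright specular peaks into the same range as the dimmer diffuse values, and (b) an initial confidence map that scores each pixel by how far it exceeds the per-channel mean intensity.

```python
import numpy as np

def log_compress(img):
    """Map intensities in [0, 1] to the log domain so that specular
    peaks and off-peak diffuse values become comparable in magnitude.
    Normalized with log(2) so that an input of 1.0 maps to 1.0."""
    return np.log1p(img) / np.log(2.0)

def mean_confidence_map(img):
    """Hypothetical initial confidence map: pixels far above the
    per-channel mean are more likely to belong to a specular highlight.
    `img` is an (H, W, C) float array; the result is scaled to [0, 1]."""
    mean = img.mean(axis=(0, 1), keepdims=True)   # per-channel average
    excess = np.clip(img - mean, 0.0, None)       # positive deviation only
    peak = excess.max()
    return excess / peak if peak > 0 else excess

# Toy image: one bright highlight pixel on a dark background.
img = np.zeros((4, 4, 3))
img[0, 0] = 1.0
conf = mean_confidence_map(img)   # highlight pixel gets confidence 1.0
```

Such a map gives the network a rough specular/diffuse separation to start from, which is what the abstract credits for faster convergence.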



Acknowledgment

This research was made possible by the financial support of the Educational Commission of Hubei Province of China (Grant No. D20211701).

Author information

Corresponding author

Correspondence to Li Li.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ma, Y. et al. (2024). Highlight Removal from a Single Image Based on a Prior Knowledge Guided Unsupervised CycleGAN. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14495. Springer, Cham. https://doi.org/10.1007/978-3-031-50069-5_32

  • DOI: https://doi.org/10.1007/978-3-031-50069-5_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50068-8

  • Online ISBN: 978-3-031-50069-5

  • eBook Packages: Computer Science; Computer Science (R0)
