Removing Stray-Light for Wild-Field Fundus Image Fusion Based on Large Generative Models

  • Conference paper
  • MultiMedia Modeling (MMM 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14557)

Abstract

In low-cost wide-field fundus cameras, the built-in lighting sources are prone to generating stray-light nearby, leading to low-quality image regions. To visualize retinal structures more clearly, two images captured under complementary lighting patterns are fused; however, the fused image may still exhibit stray-light near the hard fusion boundaries, typically along the diagonal directions. In this paper, an image enhancement algorithm based on a generative adversarial network is proposed to eliminate the stray-light in fused wide-field fundus images. First, a haze density estimation module is introduced to guide the model to attend to the more severely affected stray-light regions. Second, a detail recovery module is introduced to reduce the loss of detail caused by stray-light. Finally, a domain discriminator with unsupervised domain adaptation is employed to achieve better generalization on clinical data. Experiments show that our method obtains the best results on both the public synthesized traditional fundus image dataset EyePACS-K and the private wide-field fundus image dataset Retivue. Compared with the state of the art, the average PSNR and structural similarity (SSIM) over the two datasets are improved by 1.789 dB and 0.021, respectively.
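The abstract names three components: a haze density estimation module that steers attention toward heavily degraded regions, a detail recovery module, and a domain discriminator for unsupervised domain adaptation. Since no implementation details are given on this page, the following PyTorch sketch shows one conventional way such a pipeline could be wired together; all module names, layer choices, and shapes are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only -- not the authors' code. Module names
# (HazeDensityEstimator, DetailRecovery, DomainDiscriminator) and all
# layer choices are assumptions based on the components the abstract names.
import torch
import torch.nn as nn


class HazeDensityEstimator(nn.Module):
    """Predicts a per-pixel stray-light/haze density map in [0, 1]."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DetailRecovery(nn.Module):
    """Shallow branch that reinjects high-frequency detail lost to stray-light."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class Generator(nn.Module):
    """Enhancement generator: the density map re-weights encoder features so
    the network attends to the most degraded regions; the detail branch is
    added back before decoding a residual correction of the input."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.density = HazeDensityEstimator()
        self.encode = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.detail = DetailRecovery(ch)
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x: torch.Tensor):
        d = self.density(x)                 # B x 1 x H x W density map
        feats = self.encode(x) * (1.0 + d)  # emphasize stray-light regions
        feats = feats + self.detail(x)      # restore fine retinal structures
        out = torch.clamp(x + self.decode(feats), 0.0, 1.0)
        return out, d


class DomainDiscriminator(nn.Module):
    """PatchGAN-style classifier of synthetic vs. clinical images; training it
    adversarially against the generator encourages domain-invariant outputs."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```

In the usual unsupervised domain adaptation recipe, the generator would minimize a reconstruction loss against haze-free targets on the synthesized data (EyePACS-K) plus an adversarial loss from the domain discriminator evaluated on unlabeled clinical images (Retivue).

The reported gains (1.789 dB PSNR, 0.021 SSIM on average) use the two standard full-reference image quality metrics. A minimal way to compute them with scikit-image, assuming RGB float images scaled to [0, 1]:

```python
# Standard PSNR/SSIM evaluation, mirroring the metrics the abstract reports.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(enhanced: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim
```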

Acknowledgement

This work was supported by the Xinjiang Production and Construction Corps Science and Technology Project, the Science and Technology Development Program in Major Fields (2022AB021), and the National High Level Hospital Clinical Research Funding (BJ-2022-120, BJ-2023-104).

Author information

Correspondence to Jun Wu.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wu, J., He, M., Liu, Y., Lin, J., Huang, Z., Ding, D. (2024). Removing Stray-Light for Wild-Field Fundus Image Fusion Based on Large Generative Models. In: Rudinac, S., et al. (eds.) MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_1

  • DOI: https://doi.org/10.1007/978-3-031-53302-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53301-3

  • Online ISBN: 978-3-031-53302-0

  • eBook Packages: Computer Science, Computer Science (R0)
