
MultiGAN: Multi-domain Image Translation from OCT to OCTA

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13535)


Abstract

Optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) are important imaging techniques for assessing and managing retinal diseases. OCTA can display more blood vessel information than OCT but requires software and hardware modifications to OCT devices. A large amount of OCT data lacks corresponding OCTA data, which greatly limits doctors' diagnoses. Considering the inconvenience of acquiring OCTA images and their inevitable mechanical artifacts, we introduce image-to-image translation to generate OCTA from OCT. In this paper, we propose a novel method, MultiGAN, which generates three target-domain outputs from a single input image without relying on a domain code. We employ ResNet blocks in the skip connections to preserve details. A domain-dependent loss is proposed to impose constraints among the OCTA projection maps. A dataset containing paired OCT and OCTA images from 500 eyes diagnosed with various retinal diseases is used to evaluate the performance of the proposed network. Results from cross-validation experiments demonstrate the stability and superior performance of the proposed model compared with state-of-the-art models.
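The abstract reports that generated OCTA images are evaluated against paired ground truth via cross-validation, but does not list the metrics. PSNR is a standard full-reference metric for image-translation quality; the sketch below is an illustrative assumption, not the paper's stated protocol:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth OCTA image and a
    generated one, in dB. Higher means the generated image is closer to
    the reference."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a "generated" image that differs from the reference by mild noise
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
gen = np.clip(ref + rng.normal(0.0, 2.0, size=ref.shape), 0.0, 255.0)
score = psnr(ref, gen)  # small noise -> high PSNR
```

In a cross-validation setup like the one described, such a metric would be averaged over the held-out fold's image pairs for each split.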



Author information

Correspondence to Zexuan Ji.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pan, B., Ji, Z., Chen, Q. (2022). MultiGAN: Multi-domain Image Translation from OCT to OCTA. In: Yu, S., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol 13535. Springer, Cham. https://doi.org/10.1007/978-3-031-18910-4_28


  • DOI: https://doi.org/10.1007/978-3-031-18910-4_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18909-8

  • Online ISBN: 978-3-031-18910-4

  • eBook Packages: Computer Science, Computer Science (R0)
