Abstract
Annotating medical images, especially fundus images with their complex structures, requires both expertise and time. To address this, fundus image synthesis methods have been proposed that obtain specific categories of samples by combining vessel components with basic fundus images; these methods always require well-segmented vessels from real fundus images. Unlike these methods, we present a one-stage generative network that synthesizes healthy fundus images from scratch. First, we propose a basic attention Generator to represent both global and local features. Second, we guide the Generator to focus on multi-scale fundus texture and structure features for better synthesis. Third, we design a self-motivated strategy to construct a vessel-assisting module for vessel refinement. By integrating these three sub-modules, our fundus synthesis network, termed FundusGAN, provides one-stage fundus image generation without extra references. As a result, the synthetic fundus images are anatomically consistent with real images and exhibit both diversity and reasonable visual quality.
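The abstract's "basic attention Generator" is described as presenting both global and local features; a common way to realize this in a convolutional GAN is a SAGAN-style self-attention block, which lets any spatial position attend to all others. The sketch below is illustrative only, assuming a PyTorch implementation; the layer name `SelfAttention` and the channel-reduction factor of 8 are conventions from the self-attention GAN literature, not details taken from this paper.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """SAGAN-style self-attention block.

    Convolutions capture local structure; this block adds a global
    attention map over all spatial positions, blended back into the
    input via a learned scalar `gamma` that starts at zero (so training
    begins from the purely convolutional, local behaviour).
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs project features into query/key/value spaces.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # (b, hw, c//8)
        k = self.key(x).view(b, -1, h * w)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)                    # (b, hw, hw)
        v = self.value(x).view(b, -1, h * w)                   # (b, c, hw)
        out = (v @ attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x  # residual: identity at init
```

Because `gamma` is initialized to zero, the block is an exact identity at the start of training, which keeps early GAN training stable while still allowing global context to be learned later.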
Notes
1. The dataset is available at https://odir2019.grand-challenge.org/dataset.
2. The code is available at https://github.com/juntang-zhuang/LadderNet.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Cai, C., Xia, X., Fang, Y. (2022). FundusGAN: A One-Stage Single Input GAN for Fundus Synthesis. In: Yu, S., et al. Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol 13535. Springer, Cham. https://doi.org/10.1007/978-3-031-18910-4_3
DOI: https://doi.org/10.1007/978-3-031-18910-4_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-18909-8
Online ISBN: 978-3-031-18910-4
eBook Packages: Computer Science; Computer Science (R0)