Abstract
Generative Adversarial Networks (GANs) are gaining increasing attention in the computer vision domain thanks to their ability to generate synthetic data, particularly for domain adaptation and image-to-image translation. These properties have also attracted the medical community, which sees in them a way to tackle complex biomedical challenges such as the translation between different medical imaging acquisition protocols. Indeed, since the acquisition protocol actually used depends strongly on factors such as the operator, the clinical aim and the centre, gathering cohorts of patients that all share the same type of imaging remains an open challenge. In this paper, we address this problem by using a GAN to build a domain translation architecture for breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), considering two different acquisition protocols in the context of automatic lesion classification. Although this work is intended as a first step toward artificial data generation in the medical domain, the obtained results have been analysed from both a quantitative and a qualitative point of view, in order to evaluate the correctness and quality of the proposed architecture as well as its usability in a clinical scenario.
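For readers unfamiliar with how unpaired domain translation of this kind is typically wired, the sketch below illustrates a CycleGAN-style setup with two generators, two discriminators, an LSGAN adversarial term and a cycle-consistency term. It is a minimal illustration only: the toy networks, tensor shapes, loss weights and optimiser settings are assumptions for exposition, not the architecture actually proposed in the paper.

```python
# Minimal CycleGAN-style sketch for unpaired translation between two
# DCE-MRI acquisition protocols (domains A and B). Illustrative only:
# network depths, loss weights and shapes are assumed placeholders,
# not the authors' implementation.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder generator (real models use ResNet/U-Net backbones)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.InstanceNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.InstanceNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-like discriminator returning patch-wise real/fake scores."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Two generators translate in opposite directions; one discriminator per domain.
G_ab, G_ba = TinyGenerator(), TinyGenerator()   # A -> B and B -> A
D_a, D_b = TinyDiscriminator(), TinyDiscriminator()

mse, l1 = nn.MSELoss(), nn.L1Loss()             # LSGAN adversarial + L1 cycle terms
opt_g = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)

# One illustrative generator update on random stand-ins for unpaired slices.
real_a = torch.randn(4, 1, 64, 64)              # protocol-A slices
real_b = torch.randn(4, 1, 64, 64)              # protocol-B slices (unpaired)

fake_b = G_ab(real_a)                           # A -> B translation
fake_a = G_ba(real_b)                           # B -> A translation
rec_a = G_ba(fake_b)                            # A -> B -> A reconstruction
rec_b = G_ab(fake_a)                            # B -> A -> B reconstruction

score_fb, score_fa = D_b(fake_b), D_a(fake_a)
adv = mse(score_fb, torch.ones_like(score_fb)) + mse(score_fa, torch.ones_like(score_fa))
cycle = l1(rec_a, real_a) + l1(rec_b, real_b)   # cycle-consistency preserves content

loss_g = adv + 10.0 * cycle                     # 10.0 is a typical, assumed weight
opt_g.zero_grad()
loss_g.backward()                               # discriminator updates omitted for brevity
opt_g.step()
```

The cycle-consistency term is what makes training on unpaired protocol-A and protocol-B scans possible, since it forces each translated slice to remain mappable back to its original.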
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Galli, A., Gravina, M., Marrone, S., Sansone, C. (2023). Generative Adversarial Networks for Domain Translation in Unpaired Breast DCE-MRI Datasets. In: Conte, D., Fred, A., Gusikhin, O., Sansone, C. (eds) Deep Learning Theory and Applications. DeLTA 2023. Communications in Computer and Information Science, vol 1875. Springer, Cham. https://doi.org/10.1007/978-3-031-39059-3_25
DOI: https://doi.org/10.1007/978-3-031-39059-3_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-39058-6
Online ISBN: 978-3-031-39059-3
eBook Packages: Computer Science, Computer Science (R0)