Abstract
Fusing multi-modality medical images, such as MR and PET, can provide complementary information that improves diagnostic performance. However, while MR data are abundant and readily available, PET data are often scarce. In this paper, we propose a novel end-to-end network, called Bidirectional GAN, in which image contexts and the latent vector are effectively used and jointly optimized for brain MR-to-PET synthesis. Specifically, a bidirectional mapping mechanism is designed to embed the diverse brain structural details into the high-dimensional latent space, and an improved network architecture with modified loss functions further enhances the quality of the synthetic images. Notably, the proposed method synthesizes plausible PET images while preserving the diverse brain structures of individual subjects. Experiments demonstrate that the proposed method outperforms state-of-the-art methods in terms of quantitative measures, qualitative evaluation, and improvement of classification accuracy.
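The abstract describes an adversarial objective combined with a bidirectional (image-to-latent) mapping constraint, but does not spell out the exact losses. As an illustrative sketch only, the snippet below shows one common way such an objective can be assembled: a least-squares adversarial term, a pixel-wise L1 reconstruction term, and a latent-consistency term standing in for the bidirectional mapping. All function names and weights (`lam_rec`, `lam_lat`) are hypothetical choices, not the paper's actual formulation.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss: push scores on real PET
    # toward 1 and scores on synthesized PET toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # Generator's adversarial term: make synthesized PET score like real (1).
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

def generator_objective(d_fake, fake_pet, real_pet, z_from_fake, z_target,
                        lam_rec=100.0, lam_lat=10.0):
    # Hypothetical combined objective: adversarial term + pixel-wise L1
    # reconstruction + a latent-consistency term that stands in for the
    # bidirectional mapping between image space and latent space.
    adv = lsgan_g_loss(d_fake)
    rec = np.mean(np.abs(fake_pet - real_pet))
    lat = np.mean(np.abs(z_from_fake - z_target))
    return adv + lam_rec * rec + lam_lat * lat
```

In this sketch the latent term penalizes the distance between the latent code re-encoded from the synthesized PET and the target code, which is one way to encourage the latent space to carry subject-specific structural detail.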
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant 61872351 and Grant 61771465, in part by the International Science and Technology Cooperation Projects of Guangdong under Grant 2019A050510030, in part by the Strategic Priority CAS Project under Grant XDB38000000, in part by the Major Projects from the General Logistics Department of the People's Liberation Army under Grant AWS13C008, and in part by the Shenzhen Key Basic Research Projects under Grant JCYJ20200507182506416.
© 2020 Springer Nature Switzerland AG
Cite this paper
Hu, S., Shen, Y., Wang, S., Lei, B. (2020). Brain MR to PET Synthesis via Bidirectional Generative Adversarial Network. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science(), vol 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_67
Print ISBN: 978-3-030-59712-2
Online ISBN: 978-3-030-59713-9