Abstract
To obtain high-quality positron emission tomography (PET) images at a low dose, this study proposes an end-to-end 3D generative adversarial network with an embedded transformer, named Transformer-GAN, to reconstruct standard-dose PET (SPET) images from the corresponding low-dose PET (LPET) images. Since a convolutional neural network (CNN) describes local spatial features well, while a transformer excels at capturing long-range semantic information thanks to its global information extraction ability, our generator takes advantage of both and is designed as an EncoderCNN-Transformer-DecoderCNN architecture. Specifically, the EncoderCNN extracts compact feature representations with rich spatial information using a CNN, the Transformer captures the long-range dependencies among the features learned by the EncoderCNN, and the DecoderCNN restores the reconstructed PET image. Moreover, to ensure similarity between the reconstructed image and the real image at both the voxel-intensity level and the data-distribution level, we combine a voxel-wise estimation error with an adversarial loss to train the generator. Validation on clinical PET data shows that the proposed method outperforms state-of-the-art methods in both qualitative and quantitative measures.
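As a rough illustration (not the authors' implementation), the NumPy sketch below shows the two ingredients the abstract names: scaled dot-product self-attention over flattened 3D CNN feature tokens, which is how the transformer stage models long-range dependencies, and a generator loss combining a voxel-wise L1 term with a non-saturating adversarial term. The weighting `lam`, the token/embedding dimensions, and the random projection matrices are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k=16, seed=0):
    """Scaled dot-product self-attention over a set of feature tokens.

    tokens: (n, d) array, e.g. a 3D feature map of shape (D, H, W, d)
    flattened to n = D*H*W tokens. Each output token is a weighted mix
    of ALL input tokens, so dependencies are not limited to a local
    receptive field. Returns the attended tokens and the (n, n)
    attention matrix (each row sums to 1).
    """
    rng = np.random.default_rng(seed)
    d = tokens.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    return attn @ V, attn

def generator_loss(pred, target, d_fake, lam=0.05):
    """Voxel-wise L1 error plus a non-saturating adversarial term.

    d_fake: discriminator's probability that the reconstruction is real.
    lam is an illustrative weighting, not a value from the paper.
    """
    voxel = np.abs(pred - target).mean()                 # voxel-level fidelity
    adv = -np.log(np.clip(d_fake, 1e-8, 1.0)).mean()    # fool the discriminator
    return voxel + lam * adv
```

For example, a 4x4x4 feature volume with 32 channels becomes 64 tokens of dimension 32; `self_attention` then returns 64 attended tokens of dimension `d_k`, each informed by the full volume.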
Acknowledgement
This work is supported by the National Natural Science Foundation of China (NSFC 62071314) and the Sichuan Science and Technology Program (2021YFG0326, 2020YFG0079).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Luo, Y. et al. (2021). 3D Transformer-GAN for High-Quality PET Reconstruction. In: de Bruijne, M., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science(), vol 12906. Springer, Cham. https://doi.org/10.1007/978-3-030-87231-1_27
Print ISBN: 978-3-030-87230-4
Online ISBN: 978-3-030-87231-1