Abstract
To reduce the potential radiation risk, low-dose Single Photon Emission Computed Tomography (SPECT) is of increasing interest. Many deep learning-based methods have been developed to perform low-dose imaging while maintaining image quality. However, most existing methods ignore the unique inner structure of the original sinogram, which limits their restoration ability. In this paper, we propose a GNN-CNN-UNet (GCUNet) that learns the non-local and local structures of the sinogram with a Graph Neural Network (GNN) and a Convolutional Neural Network (CNN), respectively, for low-dose SPECT sinogram restoration. In particular, we propose a sinogram-structure-based self-defined-neighbors GNN (SSN-GNN) combined with a Window-KNN-based GNN (W-KNN-GNN) module to construct the underlying graph structure. Afterwards, we apply the maximum likelihood expectation maximization (MLEM) algorithm to reconstruct images from the restored sinogram. The XCAT dataset is used to evaluate the performance of the proposed GCUNet. Experimental results demonstrate that, compared with several reconstruction methods, the proposed method achieves significant improvements in both noise reduction and structure preservation.
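The abstract names two concrete ingredients: a window-restricted k-nearest-neighbor graph over sinogram patches (the W-KNN idea) and MLEM reconstruction from the restored sinogram. Below is a minimal PyTorch sketch of both steps, given only for illustration: the function names window_knn_graph and mlem_update, the flattened-patch windowing, and all parameter choices are assumptions of this sketch, not the paper's implementation.

```python
import torch

def window_knn_graph(feats: torch.Tensor, window: int = 8, k: int = 4) -> torch.Tensor:
    """Toy k-NN graph over sinogram patch features, with candidate
    neighbours restricted to a local window of patches (a simplified
    stand-in for the W-KNN idea, not the paper's exact construction).

    feats: (N, C) tensor, one feature vector per sinogram patch.
    Returns an edge index of shape (2, num_edges).
    """
    n = feats.shape[0]
    src, dst = [], []
    for start in range(0, n, window):
        block = feats[start:start + window]                 # patches in this window, (w, C)
        dists = torch.cdist(block, block)                   # pairwise Euclidean distances
        kk = min(k + 1, block.shape[0])                     # k + 1: nearest match is the patch itself
        idx = dists.topk(kk, largest=False).indices[:, 1:]  # drop the self-match
        for i in range(block.shape[0]):
            for j in idx[i].tolist():
                src.append(start + i)
                dst.append(start + j)
    return torch.tensor([src, dst])

def mlem_update(x: torch.Tensor, A: torch.Tensor, y: torch.Tensor,
                eps: float = 1e-8) -> torch.Tensor:
    """One standard MLEM iteration for emission tomography:
    x <- x / (A^T 1) * A^T ( y / (A x) ),
    where A is the system matrix, y the (restored) sinogram, x the image."""
    forward = A @ x                          # forward projection of current estimate
    ratio = y / (forward + eps)              # measured / estimated counts
    sens = A.t() @ torch.ones_like(y)        # sensitivity image A^T 1
    return x / (sens + eps) * (A.t() @ ratio)

# Toy usage on random data (shapes only, no physical meaning).
edges = window_knn_graph(torch.randn(64, 16))
A = torch.rand(120, 100)   # hypothetical system matrix
x = torch.ones(100)        # initial image estimate
y = torch.rand(120)        # restored sinogram, flattened
for _ in range(10):
    x = mlem_update(x, A, y)
```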
Supported by the Natural Science Foundation of Guangdong under Grant 2022A1515012379.
Cite this paper
Chen, K., Liang, Z., Li, S. (2024). GCUNET: Combining GNN and CNN for Sinogram Restoration in Low-Dose SPECT Reconstruction. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14437. Springer, Singapore. https://doi.org/10.1007/978-981-99-8558-6_40