Coordinated Reconstruction Dual-Branch Network for Low-Dose PET Reconstruction

  • Conference paper
Theoretical Computer Science (NCTCS 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1944)

Abstract

Positron Emission Tomography (PET), known for its sensitivity and non-invasiveness in visualizing metabolic processes in the human body, has been widely used for clinical diagnosis. However, PET imaging requires the administration of a radioactive tracer, which poses potential risks to human health. Reducing the tracer dose lowers the information content of the acquired data and increases independent noise, so reconstructing high-quality images from low-dose PET becomes crucial. Existing methods that learn a single mapping for low-dose PET reconstruction often suffer from over-denoising or incomplete information. To address this challenge, this work investigates the generation of realistic full-dose PET images. First, we propose a simple yet reasonable low-dose PET model that treats each reconstructed voxel as a random variable and divides the reconstruction problem into two sub-problems: noise suppression and missing-data recovery. We then introduce a novel framework, the Coordinated Reconstruction Dual-Branch Network (CRDB), which uses two branches to perform denoising and information completion separately. Moreover, the CRDB leverages a Fast Channel Attention mechanism to capture diverse and distinctive information from different channels. Additionally, to emphasize pronounced distinctions, we adopt the Huber loss as the loss function. Quantitative experiments demonstrate that our strategy achieves favorable results in low-dose PET reconstruction.
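For context on the loss choice: the Huber loss mentioned above is Huber's classical robust loss, quadratic for small residuals and linear for large ones, so pronounced errors are penalized without letting outliers dominate training. Writing r for the residual between the predicted and full-dose intensity and δ for the transition threshold (both symbols are ours; the δ actually used in the paper is not stated here), it is

\[
L_\delta(r) =
\begin{cases}
\tfrac{1}{2}\, r^{2}, & |r| \le \delta,\\
\delta \left( |r| - \tfrac{1}{2}\delta \right), & |r| > \delta.
\end{cases}
\]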

Data used in the preparation of this article were obtained from the Dept. of Nuclear Medicine, University of Bern, and the School of Medicine, Ruijin Hospital. As such, the investigators contributed to the design and implementation of the dataset and/or provided data but did not participate in the analysis or writing of this report. A complete listing of investigators can be found at https://ultra-low-dose-pet.grandchallenge.org/Description/.
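To make the dual-branch idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' released code: two parallel convolutional branches, one for noise suppression and one for missing-data recovery, whose outputs are fused, with an ECA-style channel attention as one plausible reading of "Fast Channel Attention" and the Huber loss for training. All module and function names (FastChannelAttention, conv_block, DualBranchNet, fuse) are illustrative assumptions, and the sketch works on 2D slices for brevity even though PET data are volumetric.

# Hypothetical sketch, not the authors' released code: a dual-branch CNN with an
# ECA-style channel attention and the Huber loss, illustrating the abstract's ideas.
import torch
import torch.nn as nn


class FastChannelAttention(nn.Module):
    """One plausible 'fast' channel attention: global pooling + a 1D conv over channels."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> per-channel descriptor -> per-channel gate in (0, 1)
        w = self.pool(x).squeeze(-1).transpose(1, 2)      # (B, 1, C)
        w = torch.sigmoid(self.conv(w)).transpose(1, 2)   # (B, C, 1)
        return x * w.unsqueeze(-1)


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Convolution + ReLU, followed by channel attention on the resulting feature maps.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
        FastChannelAttention(),
    )


class DualBranchNet(nn.Module):
    """Two parallel branches: one suppresses noise, one recovers missing signal."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.denoise = nn.Sequential(
            conv_block(1, ch), conv_block(ch, ch), nn.Conv2d(ch, 1, 3, padding=1))
        self.complete = nn.Sequential(
            conv_block(1, ch), conv_block(ch, ch), nn.Conv2d(ch, 1, 3, padding=1))
        self.fuse = nn.Conv2d(2, 1, 1)  # simple 1x1 fusion to coordinate the branches

    def forward(self, low_dose: torch.Tensor) -> torch.Tensor:
        d = self.denoise(low_dose)
        c = self.complete(low_dose)
        return self.fuse(torch.cat([d, c], dim=1))


if __name__ == "__main__":
    model = DualBranchNet()
    criterion = nn.HuberLoss(delta=1.0)  # the delta used by the authors is not given here
    low, full = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
    loss = criterion(model(low), full)
    loss.backward()
    print(float(loss))

The 1x1 fusion convolution, branch depths, and channel counts here are placeholders; the paper's actual coordination mechanism, attention design, and network depth may differ.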

Acknowledgements

This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFF0606303, the National Natural Science Foundation of China under Grant 62206054, and the Research Capacity Enhancement Project of Key Construction Discipline in Guangdong Province under Grant 2022ZDJS028. Thanks to Xue Song, Kuangyu Shi and Axel Rominger (Dept. of Nuclear Medicine, University of Bern) and to Hanzhong Wang, Rui Guo and Biao Li (Ruijin Hospital, Shanghai Jiaotong University) for providing the data.

Author information

Corresponding author

Correspondence to Jianping Yin.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Li, Y. et al. (2024). Coordinated Reconstruction Dual-Branch Network for Low-Dose PET Reconstruction. In: Cai, Z., Xiao, M., Zhang, J. (eds) Theoretical Computer Science. NCTCS 2023. Communications in Computer and Information Science, vol 1944. Springer, Singapore. https://doi.org/10.1007/978-981-99-7743-7_12

  • DOI: https://doi.org/10.1007/978-981-99-7743-7_12

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7742-0

  • Online ISBN: 978-981-99-7743-7

  • eBook Packages: Computer Science, Computer Science (R0)
