
Cross-Attention for Improved Motion Correction in Brain PET

  • Conference paper
  • First Online:
Machine Learning in Clinical Neuroimaging (MLCN 2023)

Abstract

Head movement during long scan sessions degrades reconstruction quality in positron emission tomography (PET) and introduces artifacts, limiting clinical diagnosis and treatment. Recent deep learning-based motion correction work utilized raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust for testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference and moving images, explicitly focusing the model on the most relevant information for motion correction: the head region. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved the performance of motion correction by 58% and 26% in translation and rotation, respectively, in multi-subject testing in HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without the dependence on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
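The abstract describes cross-attention that lets features of the moving image attend to features of the reference image. As a rough illustration of that operation only (not the authors' network — all names, shapes, and the plain scaled dot-product formulation here are illustrative assumptions), a minimal sketch:

```python
import numpy as np

def cross_attention(query_feats: np.ndarray, key_value_feats: np.ndarray) -> np.ndarray:
    """Scaled dot-product cross-attention: tokens from one image (queries)
    attend to tokens from another image (keys/values)."""
    d = query_feats.shape[-1]
    # Similarity between every query token and every key token
    scores = query_feats @ key_value_feats.T / np.sqrt(d)
    # Softmax over the key dimension (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each query token becomes a weighted mix of the reference tokens
    return weights @ key_value_feats

rng = np.random.default_rng(0)
ref = rng.standard_normal((16, 32))   # reference-image feature tokens (hypothetical)
mov = rng.standard_normal((16, 32))   # moving-image feature tokens (hypothetical)
attended = cross_attention(mov, ref)
print(attended.shape)  # prints (16, 32)
```

In a registration-style setting such as this paper's, the attention weights encode a soft spatial correspondence between the two images, which is what lets the model concentrate on the mutually informative head region.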



Acknowledgements

This work was supported by the National Key Research and Development Program of China (2017YFA0700800) and the National Institutes of Health (NIH) grant R21 EB028954.

Author information

Correspondence to Tianyi Zeng.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 277 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cai, Z. et al. (2023). Cross-Attention for Improved Motion Correction in Brain PET. In: Abdulkadir, A., et al. Machine Learning in Clinical Neuroimaging. MLCN 2023. Lecture Notes in Computer Science, vol 14312. Springer, Cham. https://doi.org/10.1007/978-3-031-44858-4_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-44858-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44857-7

  • Online ISBN: 978-3-031-44858-4

  • eBook Packages: Computer Science, Computer Science (R0)
