Improving the Quality of Sparse-view Cone-Beam Computed Tomography via Reconstruction-Friendly Interpolation Network

  • Conference paper
Computer Vision – ACCV 2022 (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13846)


Abstract

Reconstructing cone-beam computed tomography (CBCT) typically relies on the Feldkamp-Davis-Kress (FDK) algorithm to ‘translate’ hundreds of 2D X-ray projections acquired at different angles into a 3D CT image. To minimize the X-ray-induced ionizing radiation, sparse-view CBCT acquires fewer projections at a wider angular interval, but suffers from inferior reconstruction quality. To address this, recent solutions mainly resort to synthesizing the missing projections and forcing them to be as realistic as the actual ones, which is extremely difficult due to the tissue superimposition inherent in X-ray imaging. In this paper, we argue that the synthetic projections should restore as much FDK-required information as possible, while visual fidelity is of secondary importance. Inspired by the simple fact that FDK relies only on the frequency information that remains after ramp-filtering, we develop a Reconstruction-Friendly Interpolation Network (RFI-Net), which first utilizes a 3D-2D attention network to learn inter-projection relations for synthesizing the missing projections, and then introduces a novel Ramp-Filter loss to enforce frequency consistency between the synthesized and real projections after ramp-filtering. By doing so, RFI-Net's capacity is devoted to restoring the information most useful for CT reconstruction during projection synthesis. We build a complete reconstruction framework consisting of the proposed RFI-Net, FDK, and a commonly used CT post-refinement step. Experimental results on reconstruction from only one-eighth of the projections demonstrate that using the full-view projections restored by RFI-Net significantly improves reconstruction quality, increasing PSNR by 2.59 dB and 2.03 dB on the walnut and patient CBCT datasets, respectively, compared with projections restored by other state-of-the-art methods.

Y. Wang and L. Chao—Co-first authors.
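The key mechanism behind the Ramp-Filter loss, comparing synthesized and real projections only after they pass through FDK's ramp filter, can be illustrated with a short sketch. The PyTorch-style snippet below is an illustrative assumption rather than the authors' implementation: the tensor layout (batch, views, detector rows, detector columns), the plain |ω| ramp in place of a band-limited Ram-Lak filter, and the L1 distance are all choices made here for brevity.

```python
import torch
import torch.nn.functional as F


def ramp_filter_loss(pred_proj: torch.Tensor, real_proj: torch.Tensor) -> torch.Tensor:
    """Frequency-consistency loss after ramp-filtering (illustrative sketch only).

    Both tensors are assumed to be shaped (batch, views, det_rows, det_cols);
    FDK's 1-D ramp filter acts along the last (detector-column) axis.
    """
    n = pred_proj.shape[-1]
    # Plain |omega| ramp in the 1-D Fourier domain; a production FDK pipeline
    # would typically use a band-limited or apodized variant (e.g., Ram-Lak).
    ramp = torch.fft.fftfreq(n, device=pred_proj.device).abs()

    def apply_ramp(x: torch.Tensor) -> torch.Tensor:
        spectrum = torch.fft.fft(x, dim=-1)            # 1-D FFT per detector row
        return torch.fft.ifft(spectrum * ramp, dim=-1).real

    # Penalize only the differences that survive ramp-filtering, i.e. the
    # information FDK actually back-projects.
    return F.l1_loss(apply_ramp(pred_proj), apply_ramp(real_proj))
```

In a full training loop, such a term would presumably be combined with a conventional projection-domain reconstruction loss, with the weighting between the two treated as a hyperparameter.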



Acknowledgements

This work was supported in part by the National Key R&D Program of China (Grant No. 2022YFE0200600), the National Natural Science Foundation of China (Grant No. 62202189), the Fundamental Research Funds for the Central Universities (2021XXJS033), the Science Fund for Creative Research Group of China (Grant No. 61721092), the Director Fund of WNLO, and research grants from United Imaging Healthcare Inc.

Author information

Corresponding authors

Correspondence to Zhiwei Wang or Qiang Li.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y., Chao, L., Shan, W., Zhang, H., Wang, Z., Li, Q. (2023). Improving the Quality of Sparse-view Cone-Beam Computed Tomography via Reconstruction-Friendly Interpolation Network. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13846. Springer, Cham. https://doi.org/10.1007/978-3-031-26351-4_6

  • DOI: https://doi.org/10.1007/978-3-031-26351-4_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26350-7

  • Online ISBN: 978-3-031-26351-4

  • eBook Packages: Computer Science, Computer Science (R0)
