Abstract
High-quality MRI reconstruction plays a critical role in clinical applications. Deep learning-based methods have achieved promising results on MRI reconstruction. However, most state-of-the-art methods were designed to optimize evaluation metrics commonly used for natural images, such as PSNR and SSIM, while visual quality is not their primary objective. Compared with fully-sampled images, the reconstructed images are often blurry, and their high-frequency features may not be sharp enough for confident clinical diagnosis. To this end, we propose an invertible sharpening network (InvSharpNet) to improve the visual quality of MRI reconstructions. During training, unlike traditional methods that learn to map the input data to the ground truth, InvSharpNet adopts a backward training strategy that learns a blurring transform from the ground truth (fully-sampled image) to the input data (blurry reconstruction). During inference, the learned blurring transform can be inverted into a sharpening transform by leveraging the network's invertibility. Experiments on various MRI datasets demonstrate that InvSharpNet improves reconstruction sharpness with few artifacts. The results were also evaluated by radiologists, who reported better visual quality and higher diagnostic confidence with the proposed method.
S. Dong: This work was done during an internship at United Imaging Intelligence.
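To make the backward-training idea concrete, below is a minimal sketch in PyTorch, not the authors' implementation: it assumes simple additive coupling layers and an L1 loss, and the names (AdditiveCoupling, InvertibleSharpener) are hypothetical. The invertible network is fit in the forward (blurring) direction, from fully-sampled image to blurry reconstruction, and its closed-form inverse is applied at inference to sharpen; the actual InvSharpNet architecture and training objectives are described in the paper.

```python
# Illustrative sketch only; layer design and losses differ from the paper's InvSharpNet.
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    """Additive coupling layer: split channels, shift one half by a function of
    the other, so the exact inverse is available in closed form."""

    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, half, 3, padding=1),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.net(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.net(y1)], dim=1)


class InvertibleSharpener(nn.Module):
    """Stack of coupling layers; forward pass = learned blurring,
    inverse pass = sharpening."""

    def __init__(self, channels=2, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(AdditiveCoupling(channels) for _ in range(depth))

    def forward(self, x):            # ground truth -> blurry estimate
        for b in self.blocks:
            x = b(x)
        return x

    def inverse(self, y):            # blurry reconstruction -> sharpened image
        for b in reversed(self.blocks):
            y = b.inverse(y)
        return y


# Backward training: fit only the forward (blurring) direction.
model = InvertibleSharpener()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ground_truth = torch.randn(1, 2, 64, 64)   # stand-in for a fully-sampled image
blurry_recon = torch.randn(1, 2, 64, 64)   # stand-in for a blurry reconstruction
loss = nn.functional.l1_loss(model(ground_truth), blurry_recon)
opt.zero_grad(); loss.backward(); opt.step()

# Inference: invert the learned blurring transform to sharpen a reconstruction.
with torch.no_grad():
    sharpened = model.inverse(blurry_recon)
```

Because every coupling layer is exactly invertible, no second network or cycle loss is needed to recover the sharpening direction; it falls out of the learned blurring transform for free.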
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Dong, S. et al. (2022). Invertible Sharpening Network for MRI Reconstruction Enhancement. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_55
DOI: https://doi.org/10.1007/978-3-031-16446-0_55
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16445-3
Online ISBN: 978-3-031-16446-0
eBook Packages: Computer Science, Computer Science (R0)