
Super-Resolution by Latent Space Exploration: Training with Poorly-Aligned Clinical and Micro CT Image Dataset

  • Conference paper
  • First Online:
Simulation and Synthesis in Medical Imaging (SASHIMI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12965)


Abstract

This paper proposes a super-resolution (SR) method for performing SR on a poorly-aligned dataset. SR methods commonly need aligned low-resolution (LR) and high-resolution (HR) images for training. To obtain paired LR and HR images in medical imaging, low- and high-resolution data must be aligned using image registration techniques. However, because aligning LR and HR images is difficult, the resulting LR-HR dataset is inevitably of low quality, and conventional SR methods, which require high-quality LR-HR pairs, fail to train on such poorly-aligned data. To tackle this problem, we propose a two-step framework for SR using poorly-aligned datasets. In the first step, we decompose the image representation into two parts: a content code that captures the image content, and a style code that captures the image style and the anatomical differences between LR and HR images. To perform SR on a given LR image, we input the content code and a latent variable simultaneously into the SR network to obtain an SR result. In the second step, using the trained SR network and an LR image, we search the latent space for the content code and style code that generate the most appropriate SR image. We conducted experiments on a poorly-aligned clinical-micro CT lung specimen dataset. Experimental results show that the proposed method outperformed conventional SR methods, increasing SSIM from 0.309 to 0.312, and produced SR results with much more convincing perceptual quality than conventional methods.
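The second step described in the abstract (latent space exploration with a trained SR network) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: `ToySRGenerator`, `explore_latents`, the consistency loss, the tensor shapes, and all hyperparameters are illustrative assumptions. The only idea taken from the abstract is optimising a content code and a style code so that the trained SR network's output remains consistent with the given LR image.

```python
# Hypothetical sketch of the latent-space-exploration step: given a trained
# SR generator G(content, style) and one LR image, optimise the content and
# style codes so that the downsampled SR output matches the LR input.
# All module names, shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySRGenerator(nn.Module):
    """Stand-in for a trained SR network G(content_code, style_code)."""

    def __init__(self, content_ch=64, style_dim=8, scale=4):
        super().__init__()
        self.scale = scale
        # map the style code to a per-channel modulation (AdaIN-like, simplified)
        self.style_fc = nn.Linear(style_dim, content_ch)
        self.to_img = nn.Conv2d(content_ch, 1, kernel_size=3, padding=1)

    def forward(self, content, style):
        mod = self.style_fc(style).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        feat = content * (1.0 + mod)
        feat = F.interpolate(feat, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)
        return self.to_img(feat)


def explore_latents(generator, lr_image, content_shape, style_dim=8,
                    steps=200, lr=1e-2):
    """Search content/style codes whose SR output is consistent with lr_image."""
    content = (0.1 * torch.randn(content_shape)).requires_grad_()
    style = (0.1 * torch.randn(1, style_dim)).requires_grad_()
    opt = torch.optim.Adam([content, style], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sr = generator(content, style)
        # consistency term: the downsampled SR result should reproduce the LR input
        sr_down = F.interpolate(sr, size=lr_image.shape[-2:],
                                mode="bilinear", align_corners=False)
        loss = F.mse_loss(sr_down, lr_image)
        loss.backward()
        opt.step()
    return generator(content, style).detach()


# usage with random data, for illustration only
if __name__ == "__main__":
    G = ToySRGenerator()
    lr_img = torch.rand(1, 1, 32, 32)
    sr_img = explore_latents(G, lr_img, content_shape=(1, 64, 32, 32))
    print(sr_img.shape)  # torch.Size([1, 1, 128, 128])
```

In this sketch the only signal guiding the search is LR-consistency; the paper's actual objective for choosing the best content and style codes may differ.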



Acknowledgements

Parts of this work were supported by MEXT/JSPS KAKENHI (26108006, 17H00867, 17K20099), the JSPS Bilateral International Collaboration Grants, AMED (JP19lk1010036 and JP20lk1010036), and the Hori Sciences & Arts Foundation.

Author information


Corresponding author

Correspondence to Tong Zheng.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zheng, T., Oda, H., Hayashi, Y., Nakamura, S., Oda, M., Mori, K. (2021). Super-Resolution by Latent Space Exploration: Training with Poorly-Aligned Clinical and Micro CT Image Dataset. In: Svoboda, D., Burgos, N., Wolterink, J.M., Zhao, C. (eds) Simulation and Synthesis in Medical Imaging. SASHIMI 2021. Lecture Notes in Computer Science, vol 12965. Springer, Cham. https://doi.org/10.1007/978-3-030-87592-3_3

  • DOI: https://doi.org/10.1007/978-3-030-87592-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87591-6

  • Online ISBN: 978-3-030-87592-3

  • eBook Packages: Computer Science (R0)
