Learning Unsupervised Parameter-Specific Affine Transformation for Medical Images Registration

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (MICCAI 2021)

Abstract

Affine registration has recently been formulated using deep learning frameworks to establish spatial correspondences between different images. In this work, we propose a new unsupervised model that investigates two new strategies to tackle fundamental problems related to affine registration. More specifically, the new model 1) has the advantage of explicitly learning specific geometric transformation parameters (e.g., translation, rotation, scaling and shearing); and 2) can effectively understand the context between the images via cross-stitch units that allow feature exchange. The proposed model is evaluated on two two-dimensional X-ray datasets and a three-dimensional CT dataset. Our experimental results show that our model not only outperforms state-of-the-art approaches but can also predict specific transformation parameters. Our core source code is available online (https://github.com/xuuuuuuchen/PASTA).
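
The second strategy mentioned in the abstract, exchanging features between the two image streams through cross-stitch units, amounts to learning a small mixing matrix that linearly combines the activations of the two branches (Misra et al., 2016). The snippet below is a minimal, hypothetical NumPy illustration of that mixing step only; the function name, tensor shapes and the placement of such units inside the proposed network are assumptions for illustration and do not reproduce the paper's exact design.

    import numpy as np

    def cross_stitch(feat_fixed, feat_moving, alpha):
        """Mix two same-shaped feature maps with a 2x2 matrix alpha,
        in the spirit of cross-stitch units (Misra et al., 2016).
        In a network, alpha would be a trainable parameter."""
        mixed_fixed = alpha[0, 0] * feat_fixed + alpha[0, 1] * feat_moving
        mixed_moving = alpha[1, 0] * feat_fixed + alpha[1, 1] * feat_moving
        return mixed_fixed, mixed_moving

    # Example: initialise close to the identity so each stream initially keeps
    # mostly its own features while receiving a little of the other's.
    alpha = np.array([[0.9, 0.1],
                      [0.1, 0.9]])
    f_fixed = np.random.rand(1, 16, 32, 32)    # hypothetical feature maps
    f_moving = np.random.rand(1, 16, 32, 32)
    g_fixed, g_moving = cross_stitch(f_fixed, f_moving, alpha)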

Notes

  1. \(\mathbf{A}_{2D} = \begin{bmatrix} a_{1} & a_{2} & a_{3}\\ a_{4} & a_{5} & a_{6}\\ 0 & 0 & 1 \end{bmatrix}\) and \(\mathbf{A}_{3D} = \begin{bmatrix} a_{1} & a_{2} & a_{3} & a_{4}\\ a_{5} & a_{6} & a_{7} & a_{8}\\ a_{9} & a_{10} & a_{11} & a_{12}\\ 0 & 0 & 0 & 1 \end{bmatrix}\).

  2. \(\mathbf{A}\) is subject to the composition order. In this work, we use the order shown in Eq. (1). (An illustrative composition is sketched after these notes.)

  3. https://medmnist.github.io/#dataset.

  4. https://learn2reg.grand-challenge.org/Datasets/.
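
To make notes 1 and 2 concrete, the sketch below assembles a 2D matrix of the form \(\mathbf{A}_{2D}\) from the parameter-specific factors named in the abstract (translation, rotation, scaling, shearing). It is a minimal, hypothetical NumPy sketch; the parameterisation and the composition order shown here are illustrative assumptions, since the order actually used in the paper is fixed by Eq. (1), which is not reproduced on this page.

    import numpy as np

    def compose_affine_2d(tx, ty, theta, sx, sy, hx, hy):
        """Compose translation, rotation, scaling and shearing into one
        3x3 homogeneous 2D affine matrix (illustrative order only)."""
        T = np.array([[1.0, 0.0, tx],
                      [0.0, 1.0, ty],
                      [0.0, 0.0, 1.0]])                       # translation
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])  # rotation
        S = np.diag([sx, sy, 1.0])                            # scaling
        H = np.array([[1.0, hx, 0.0],
                      [hy, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])                       # shearing
        return T @ R @ S @ H                                  # illustrative order

    # Example: a small rigid-plus-scale perturbation; the first two rows of A
    # correspond to the coefficients a_1..a_6 of note 1.
    A = compose_affine_2d(0.1, -0.05, np.deg2rad(5.0), 1.02, 0.98, 0.01, 0.0)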

Acknowledgments

Xu Chen is supported by a studentship jointly funded by the Vascular Surgery Research Fund in Liverpool and the Institute of Life Course and Medical Sciences, University of Liverpool, and is partially funded by The Great Britain-China Educational Trust (no. 269944), administered by the Great Britain-China Centre.

Author information

Corresponding author

Correspondence to Yalin Zheng.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 122 KB)

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Chen, X., Meng, Y., Zhao, Y., Williams, R., Vallabhaneni, S.R., Zheng, Y. (2021). Learning Unsupervised Parameter-Specific Affine Transformation for Medical Images Registration. In: de Bruijne, M., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, vol. 12904. Springer, Cham. https://doi.org/10.1007/978-3-030-87202-1_3

  • DOI: https://doi.org/10.1007/978-3-030-87202-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87201-4

  • Online ISBN: 978-3-030-87202-1

  • eBook Packages: Computer Science, Computer Science (R0)
