
Interpretable Representation Learning of Cardiac MRI via Attribute Regularization

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Abstract

Interpretability is essential in medical imaging to ensure that clinicians can comprehend and trust artificial intelligence models. Several approaches have recently been proposed to encode attributes in the latent space to enhance its interpretability. Notably, attribute regularization aims to encode a set of attributes along the dimensions of a latent representation. However, this approach builds on the variational autoencoder and therefore suffers from blurry reconstructions. In this paper, we propose an Attribute-regularized Soft Introspective Variational Autoencoder, which combines attribute regularization of the latent space with the framework of an adversarially trained variational autoencoder. On short-axis cardiac Magnetic Resonance images from the UK Biobank, we demonstrate that the proposed method addresses the blurry reconstructions of variational autoencoder methods while preserving the interpretability of the latent space.
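For intuition, the sketch below illustrates an attribute-regularization term of the kind the abstract describes: it encourages one chosen latent dimension to vary monotonically with a given clinical attribute by matching the ordering of latent values to the ordering of attribute values within a batch. This is a minimal sketch, not the authors' implementation; the function name, the `delta` sharpness parameter, and the pairwise-difference formulation are illustrative assumptions, and in the proposed method such a term would be added to an adversarially trained (soft-introspective) VAE objective.

```python
import torch
import torch.nn.functional as F


def attribute_regularization_loss(z, a, dim, delta=1.0):
    """Sketch of an attribute-regularization term (illustrative, not the paper's code).

    z:     (batch, latent_dim) latent codes from the encoder.
    a:     (batch,) attribute values (e.g. a ventricular volume per image).
    dim:   index of the latent dimension that should encode the attribute.
    delta: sharpness of the tanh surrogate for the sign of latent differences.
    """
    z_d = z[:, dim]
    # Pairwise differences within the batch, for the latent dimension and the attribute.
    dz = z_d.unsqueeze(0) - z_d.unsqueeze(1)   # (batch, batch)
    da = a.unsqueeze(0) - a.unsqueeze(1)       # (batch, batch)
    # Penalize mismatches between the ordering of latent values and attribute values.
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))


# Hypothetical usage: tie latent dimension 0 to a (random, placeholder) attribute.
z = torch.randn(16, 32)
attribute = torch.rand(16)
loss = attribute_regularization_loss(z, attribute, dim=0)
```

In training, one such term per regularized attribute would be weighted and summed with the reconstruction and adversarial losses of the autoencoder.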


Notes

  1. https://github.com/baiwenjia/ukbb_cardiac/.

  2. https://github.com/1Konny/Beta-VAE/.

  3. https://taldatech.github.io/soft-intro-vae-web/.


Acknowledgements

This research has been conducted using the UK Biobank Resource under Application Number 87065 and was supported by the German Federal Ministry of Health on the basis of a decision by the German Bundestag, under the frame of ERA PerMed. C.I.B. is in part supported by the Helmholtz Association under the joint research school “Munich School for Data Science - MUDS”.

Author information


Corresponding author

Correspondence to Maxime Di Folco.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 255 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Di Folco, M., Bercea, C.I., Chan, E., Schnabel, J.A. (2024). Interpretable Representation Learning of Cardiac MRI via Attribute Regularization. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15010. Springer, Cham. https://doi.org/10.1007/978-3-031-72117-5_46


  • DOI: https://doi.org/10.1007/978-3-031-72117-5_46


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72116-8

  • Online ISBN: 978-3-031-72117-5

  • eBook Packages: Computer Science, Computer Science (R0)
