
MoRA: LoRA Guided Multi-modal Disease Diagnosis with Missing Modality

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 15003))


Abstract

Multi-modal pre-trained models efficiently extract and fuse features from different modalities with low memory requirements for fine-tuning. Despite this efficiency, their application to disease diagnosis remains under-explored. A significant challenge is the frequent occurrence of missing modalities, which impairs performance; in addition, fine-tuning the entire pre-trained model demands substantial computational resources. To address these issues, we introduce Modality-aware Low-Rank Adaptation (MoRA), a computationally efficient method. MoRA projects each input to a low intrinsic dimension but uses distinct modality-aware up-projections for modality-specific adaptation in cases of missing modalities. In practice, MoRA integrates into the first block of the model, significantly improving performance when a modality is missing while requiring less than 1.6% of the trainable parameters of full fine-tuning. Experimental results show that MoRA outperforms existing techniques in disease diagnosis, demonstrating superior performance, robustness, and training efficiency. Code is available at: https://github.com/zhiyiscs/MoRA.
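The core idea stated in the abstract — a shared down-projection to a low intrinsic dimension combined with a separate up-projection per modality — can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation; the class name, shapes, and zero-initialization of the up-projections (a common LoRA convention) are all assumptions.

```python
import numpy as np

class MoRALayer:
    """Hypothetical sketch of Modality-aware Low-Rank Adaptation (MoRA):
    a frozen pre-trained weight W, one shared down-projection A, and one
    modality-specific up-projection B_m per modality."""

    def __init__(self, d_model=16, rank=2, modalities=("image", "text"), seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_model, d_model))      # frozen pre-trained weight
        self.A = rng.normal(size=(rank, d_model)) * 0.01  # shared down-projection
        # One up-projection per modality, zero-initialized (as in LoRA), so the
        # adapted layer starts identical to the frozen one before training.
        self.B = {m: np.zeros((d_model, rank)) for m in modalities}

    def forward(self, x, modality):
        # Frozen path plus modality-aware low-rank update: W x + B_m (A x).
        # Only A and the B_m matrices would be trained, which is why the
        # trainable-parameter count stays a small fraction of the full model.
        return self.W @ x + self.B[modality] @ (self.A @ x)

layer = MoRALayer()
x = np.ones(16)
# With zero-initialized B_m, every modality reproduces the frozen output exactly.
assert np.allclose(layer.forward(x, "image"), layer.W @ x)
```

When a modality is absent at inference time, routing each present input through its own up-projection is what makes the adaptation modality-specific in this sketch; how the paper handles the missing branch itself is not specified in the abstract.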



Acknowledgments

We thank all affiliates of the Harvard Visual Computing Group for their valuable feedback. This work was supported by NIH grant R01HD104969 and NIH grant 1U01NS132158.

Author information

Corresponding author

Correspondence to Hanspeter Pfister.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Shi, Z., Kim, J., Li, W., Li, Y., Pfister, H. (2024). MoRA: LoRA Guided Multi-modal Disease Diagnosis with Missing Modality. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15003. Springer, Cham. https://doi.org/10.1007/978-3-031-72384-1_26

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-72384-1_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72383-4

  • Online ISBN: 978-3-031-72384-1

  • eBook Packages: Computer Science (R0)
