FusionINN: Decomposable Image Fusion for Brain Tumor Monitoring

  • Conference paper in Trustworthy Artificial Intelligence for Healthcare (TAI4H 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14812)

Abstract

Image fusion typically employs non-invertible neural networks to merge multiple source images into a single fused image. However, for clinical experts, relying solely on fused images may be insufficient for making diagnostic decisions, as the fusion mechanism blends features from the source images, making it difficult to interpret the underlying tumor pathology. We introduce FusionINN, a novel decomposable image fusion framework that can efficiently generate fused images and also decompose them back into the source images. FusionINN is designed to be bijective by including a latent image alongside the fused image, while ensuring minimal transfer of information from the source images to the latent representation. To the best of our knowledge, we are the first to investigate the decomposability of fused images, which is particularly crucial for life-sensitive applications such as medical image fusion, in contrast to tasks like multi-focus or multi-exposure image fusion. Our extensive experiments validate FusionINN against existing discriminative and generative fusion methods, both subjectively and objectively. Moreover, compared to a recent denoising-diffusion-based fusion model, our approach offers faster and qualitatively better fusion results. The source code of the FusionINN framework is available at: https://github.com/nish03/FusionINN.
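
To make the bijective design concrete, below is a minimal sketch, in Python/PyTorch, of how a RealNVP-style affine coupling block can map two source images to a (fused, latent) pair and invert that mapping exactly. All names here (conv_block, FusionCoupling, the hidden width, the toy MR inputs) are illustrative assumptions for this note, not the authors' actual FusionINN architecture, which is available in the linked repository.

```python
# Minimal sketch of an invertible "fuse and decompose" mapping, assuming
# PyTorch. This is a generic RealNVP-style affine coupling block, NOT the
# authors' FusionINN architecture; it only illustrates how bijectivity lets
# a (fused, latent) pair be decomposed exactly back into the source images.
import torch
import torch.nn as nn


def conv_block(channels: int, hidden: int) -> nn.Sequential:
    """Small conv net predicting a per-pixel scale or shift map."""
    return nn.Sequential(
        nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
        nn.Conv2d(hidden, channels, 3, padding=1),
    )


class FusionCoupling(nn.Module):
    """One invertible block: each output depends on both inputs, yet the
    inverse is available in closed form (no iterative inversion needed)."""

    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        self.s1, self.t1 = conv_block(channels, hidden), conv_block(channels, hidden)
        self.s2, self.t2 = conv_block(channels, hidden), conv_block(channels, hidden)

    def forward(self, x1, x2):
        # First half-step: transform x2 conditioned on x1.
        z2 = x2 * torch.exp(torch.tanh(self.s1(x1))) + self.t1(x1)
        # Second half-step: transform x1 conditioned on the new z2.
        z1 = x1 * torch.exp(torch.tanh(self.s2(z2))) + self.t2(z2)
        return z1, z2  # e.g. z1 -> fused image, z2 -> latent image

    def inverse(self, z1, z2):
        # Undo the half-steps in reverse order.
        x1 = (z1 - self.t2(z2)) * torch.exp(-torch.tanh(self.s2(z2)))
        x2 = (z2 - self.t1(x1)) * torch.exp(-torch.tanh(self.s1(x1)))
        return x1, x2


if __name__ == "__main__":
    block = FusionCoupling()
    t1ce = torch.rand(1, 1, 64, 64)   # stand-in for a T1ce MR slice
    flair = torch.rand(1, 1, 64, 64)  # stand-in for a FLAIR MR slice
    fused, latent = block(t1ce, flair)
    rec1, rec2 = block.inverse(fused, latent)
    print(torch.allclose(t1ce, rec1, atol=1e-5),
          torch.allclose(flair, rec2, atol=1e-5))  # expect: True True
```

In a full framework of this kind, several such blocks would be stacked and trained with a fusion loss on the fused output plus a term that, per the abstract, minimizes the information the latent image carries from the sources, so the latent can be fixed or resampled at decomposition time.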

Acknowledgments

This work was primarily supported by the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Germany. The work was also partially funded by DFG as part of TRR 248 – CPEC (grant 389792660) and the Cluster of Excellence CeTI (EXC2050/1, grant 390696704). The authors gratefully acknowledge the Center for Information Services and HPC (ZIH) at TU Dresden for providing computing resources.

Author information

Corresponding author

Correspondence to Nishant Kumar.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kumar, N. et al. (2024). FusionINN: Decomposable Image Fusion for Brain Tumor Monitoring. In: Chen, H., Zhou, Y., Xu, D., Vardhanabhuti, V.V. (eds) Trustworthy Artificial Intelligence for Healthcare. TAI4H 2024. Lecture Notes in Computer Science, vol 14812. Springer, Cham. https://doi.org/10.1007/978-3-031-67751-9_4

  • DOI: https://doi.org/10.1007/978-3-031-67751-9_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-67750-2

  • Online ISBN: 978-3-031-67751-9

  • eBook Packages: Computer Science; Computer Science (R0)
