MMFGAN: A novel multimodal brain medical image fusion based on the improvement of generative adversarial network

Published in: Multimedia Tools and Applications

Abstract

In recent years, multimodal medical imaging for assisted diagnosis and treatment has developed rapidly. In brain disease diagnosis, CT-SPECT, MRI-PET and MRI-SPECT fusion images are favored by clinicians because they combine soft-tissue structure information with organ metabolism information. Most previous medical image fusion algorithms are migrations of fusion methods designed for other image types, and such transfers often lose features specific to medical images. This paper proposes a multimodal medical image fusion model based on a generative adversarial network with a residual attention mechanism. In the generator, we construct a residual attention mechanism block and a concat detail texture block. The source images are concatenated into a single matrix, which is fed into both blocks simultaneously to extract information such as size, shape, spatial location and texture detail. The resulting features are passed to a merge block that reconstructs the image. The reconstructed image and the source images are then fed into two discriminators for correction, yielding the final fused image. The model was evaluated on images from three databases and achieved good fusion results. Qualitative and quantitative evaluations show that it outperforms the comparison algorithms in fusion quality and retention of detail information.
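The generator pipeline described above (concatenate the two source modalities, process the stack in parallel through a residual attention branch and a detail texture branch, then merge the features into one reconstructed image) can be sketched as follows. This is a minimal NumPy forward-pass illustration, not the authors' implementation: the sigmoid gating in `residual_attention_block`, the Laplacian-style high-pass filter in `detail_texture_block`, and the uniform channel weighting in the merge step are all illustrative assumptions standing in for learned convolutional layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_block(x):
    # Attention map from the channel-wise mean, applied multiplicatively,
    # with a residual (skip) connection back to the input.
    attn = sigmoid(x.mean(axis=0, keepdims=True))   # (1, H, W)
    return x + x * attn                             # same shape as x

def detail_texture_block(x):
    # Crude Laplacian-style high-pass filter as a stand-in for the
    # texture-detail branch; the filtered maps are concatenated to the
    # input channels ("concat detail texture").
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    lap = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1]
           + pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:] - 4 * x)
    return np.concatenate([x, lap], axis=0)         # channels doubled

def generator(mri, pet):
    # Concatenate the two source modalities along the channel axis.
    x = np.stack([mri, pet], axis=0)                # (2, H, W)
    feat_a = residual_attention_block(x)            # (2, H, W)
    feat_b = detail_texture_block(x)                # (4, H, W)
    merged = np.concatenate([feat_a, feat_b], axis=0)  # (6, H, W)
    # Merge block: a 1x1-convolution-like weighted sum over channels
    # (uniform weights here; learned in the actual model).
    w = np.full((merged.shape[0], 1, 1), 1.0 / merged.shape[0])
    fused = (merged * w).sum(axis=0)                # (H, W)
    return np.clip(fused, 0.0, 1.0)

mri = np.random.rand(64, 64)   # placeholder soft-tissue image
pet = np.random.rand(64, 64)   # placeholder metabolism image
fused = generator(mri, pet)
print(fused.shape)             # (64, 64)
```

In the full model, the fused output and each source image would additionally be scored by two discriminators, whose adversarial losses push the generator to retain both structural and metabolic content; that training loop is omitted here.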



Acknowledgements

This research was funded by the National Key Research and Development Project of China under Grant 2019YFC0409105, by the National Natural Science Foundation of China under Grant 61801190, by the Natural Science Foundation of Jilin Province under Grant 20180101055JC, by the Industrial Technology Research and Development Funds of Jilin Province under Grant 2019C054-3, by the "Thirteenth Five-Year Plan" Scientific Research Planning Project of the Education Department of Jilin Province (JKH20200678KJ, JJKH20200997KJ), and in part by the Fundamental Research Funds for the Central Universities, JLU.

Author information

Correspondence to Xiongfei Li.


Cite this article

Guo, K., Hu, X. & Li, X. MMFGAN: A novel multimodal brain medical image fusion based on the improvement of generative adversarial network. Multimed Tools Appl 81, 5889–5927 (2022). https://doi.org/10.1007/s11042-021-11822-y
