A sum-modified-Laplacian and sparse representation based multimodal medical image fusion in Laplacian pyramid domain

  • Original Article
  • Published:
Medical & Biological Engineering & Computing

Abstract

Fusion of multimodal medical images provides complementary information for diagnosis, surgical planning, and clinical outcome evaluation. Although multiscale decomposition-based fusion methods have attracted much attention, the difficulty of choosing the decomposition level and the loss of contrast have hindered their application. Here, we present a multimodal medical image fusion method that combines the sum-modified-Laplacian (SML) with sparse representation (SR) in the Laplacian pyramid domain. In this method, we first transform the source images into high-pass and low-pass bands with the Laplacian pyramid (LP), and then fuse the high-pass and low-pass bands with SML and SR, respectively. The proposed method was compared with several existing methods, including NSST_VGG_MAX, DWT_ARV_BURTS, CVT_MAX_LIS, and NSCT_SR_MAX, in experiments on four groups of medical images: CT and MR, T1-weighted MR and T2-weighted MR, PET and MR, and SPECT and MR. Visual and quantitative results show that our method produces fused images with better brightness contrast and retains more image details than the other evaluated methods, as measured by MI, L^{AB/F}, Q^{AB/F}, and Q_w. Furthermore, our method preserves finer, more useful functional information with better image contrast, which is important for assessing lesion shape and position.
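
To make the SML rule used for the high-pass bands concrete, the following is a minimal Python sketch of the standard sum-modified-Laplacian focus measure; the unit step and the 3x3 aggregation window are assumptions for illustration, since the abstract does not state the paper's exact parameters.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sml(img, step=1, window=3):
        """Sum-modified-Laplacian (SML) focus measure.

        Modified Laplacian at each pixel:
            ML(x, y) = |2*I(x, y) - I(x - step, y) - I(x + step, y)|
                     + |2*I(x, y) - I(x, y - step) - I(x, y + step)|
        SML aggregates ML over a local window around each pixel.
        """
        img = img.astype(np.float64)
        p = np.pad(img, step, mode='edge')           # replicate borders
        up    = p[:-2 * step,  step:-step]           # I(x - step, y)
        down  = p[2 * step:,   step:-step]           # I(x + step, y)
        left  = p[step:-step, :-2 * step]            # I(x, y - step)
        right = p[step:-step, 2 * step:]             # I(x, y + step)
        ml = np.abs(2 * img - up - down) + np.abs(2 * img - left - right)
        # Local mean times the window area gives the windowed sum of ML values
        return uniform_filter(ml, size=window) * window ** 2

A coefficient with a larger SML value carries more local detail, so the high-pass fusion rule keeps, at each position, the coefficient from the source image with the larger SML.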

The basic framework of the proposed fusion method, which combines the sum-modified-Laplacian and sparse representation in the Laplacian pyramid domain, illustrated with CT and MR images.

In the proposed method, the source images are decomposed into low-pass and high-pass bands using the Laplacian pyramid (LP). The low-pass bands are fused with SR, while SML is used to fuse the high-pass bands. Visual and quantitative results show that the proposed method produces fused images with better brightness contrast and retains more image details than the other methods compared.
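
Below is a hypothetical end-to-end sketch of this LP-domain pipeline in Python with OpenCV and NumPy, reusing the sml() measure sketched above for the high-pass selection rule. The paper's SR rule for the low-pass band (a learned dictionary with OMP coding) is replaced here by a simple average purely to keep the sketch short, so this illustrates the framework rather than reproducing the authors' implementation.

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=3):
        """Decompose an image into 'levels' high-pass bands plus a low-pass residual."""
        gauss = [img.astype(np.float64)]
        for _ in range(levels):
            gauss.append(cv2.pyrDown(gauss[-1]))
        bands = []
        for i in range(levels):
            up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
            bands.append(gauss[i] - up)              # high-pass detail at level i
        bands.append(gauss[-1])                      # final low-pass band
        return bands

    def reconstruct(bands):
        """Collapse a Laplacian pyramid back into an image."""
        img = bands[-1]
        for band in reversed(bands[:-1]):
            img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
        return img

    def fuse_lp(img_a, img_b, levels=3):
        """Fuse two registered 8-bit grayscale images in the LP domain.

        High-pass bands: keep, pixel-wise, the coefficient with the larger SML.
        Low-pass band:   averaged here as a placeholder; the paper fuses it
                         with sparse representation instead.
        """
        pa = laplacian_pyramid(img_a, levels)
        pb = laplacian_pyramid(img_b, levels)
        fused = [np.where(sml(a) >= sml(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
        fused.append(0.5 * (pa[-1] + pb[-1]))        # stand-in for the SR-based rule
        return np.clip(reconstruct(fused), 0, 255).astype(np.uint8)

Applied to a registered pair loaded as 8-bit grayscale arrays, fuse_lp(ct, mr) returns the fused image; the PET/MR and SPECT/MR cases would additionally need the color information handled (for example, fusing only the intensity channel), which this sketch omits.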



Funding

This work was supported by the National Natural Science Foundation of China (grant no. 81571754) and partly supported by the Major National Scientific Instrument and Equipment Development Project (grant no. 2013YQ160551).

Author information

Corresponding author

Correspondence to Mingyue Ding.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Li, X., Zhang, X. & Ding, M. A sum-modified-Laplacian and sparse representation based multimodal medical image fusion in Laplacian pyramid domain. Med Biol Eng Comput 57, 2265–2275 (2019). https://doi.org/10.1007/s11517-019-02023-9

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11517-019-02023-9
