DOI: 10.1145/3513142.3513195
research-article

Medical image fusion using convolutional dictionary learning with adaptive contrast enhancement

Published: 13 April 2022

ABSTRACT

Medical image fusion combines complementary information from different medical imaging modalities to support more accurate clinical assessment. In this paper, a medical image fusion method based on convolutional dictionary learning with adaptive contrast enhancement (CDL-ACE) is proposed. The method uses CDL-ACE to compensate for model mismatch, reduce artifact generation, and better match visual observation. Eight pairs of brain images are tested, and four representative methods are compared to verify the performance of our approach. The average values of MI, Q0, QMI, QTE, and QNCIE are 3.4275, 0.3949, 0.6705, 0.4431, and 0.8090 for the "city" training set and 3.4282, 0.3915, 0.6703, 0.4430, and 0.8090 for the "fruit" training set, respectively. Compared with a CNN-based method, our approach improves these metrics by 3.14%, 12.79%, 4.13%, 5.25%, and 0.04% ("city") and by 3.16%, 11.83%, 4.10%, 5.22%, and 0.04% ("fruit"), respectively. The experimental results show that the method achieves strong performance in both subjective visual evaluation and objective evaluation.
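For context on the reported scores, the MI metric commonly used in image fusion is the sum of the mutual information between the fused image and each source image, estimated from joint grey-level histograms. The NumPy sketch below illustrates that standard computation only; the function names, the 256-bin choice, and the base-2 logarithm are illustrative assumptions, not the authors' evaluation code.

import numpy as np

def mutual_information(a, b, bins=256):
    # Mutual information (in bits) between two grayscale images,
    # estimated from their joint grey-level histogram.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability estimate
    px = pxy.sum(axis=1)                 # marginal of image a
    py = pxy.sum(axis=0)                 # marginal of image b
    nz = pxy > 0                         # skip zero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def fusion_mi(src1, src2, fused, bins=256):
    # Fusion MI: information the fused image shares with both source images.
    return mutual_information(src1, fused, bins) + mutual_information(src2, fused, bins)

For example, fusion_mi(ct_slice, mr_slice, fused_slice) on a registered source pair yields a score on the same scale as the MI values reported above, where higher values indicate more source information preserved in the fused image.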


  • Published in

    ICITEE '21: Proceedings of the 4th International Conference on Information Technologies and Electrical Engineering
    October 2021
    477 pages
    ISBN: 9781450386494
    DOI: 10.1145/3513142

    Copyright © 2021 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited
