DOI: 10.1145/3513142.3513197
research-article

Medical Image Fusion Using a Convolution Analysis Operator

Published: 13 April 2022

ABSTRACT

In contrast to "synthesis" models such as convolutional sparse representation (CSR) and convolutional dictionary learning (CDL), convolutional analysis operator learning (CAOL), built on the convergent block proximal extrapolated gradient method using majorizers (BPEG-M), is a recent optimization framework for block multi-nonconvex problems. CAOL trains autoencoding CNNs in an unsupervised way, avoiding the large memory requirements of patch-based learning and solving block multi-nonconvex problems more accurately. In this paper, the CAOL model is introduced into medical image fusion to overcome the defects of patch-based methods. In our scheme, the high-frequency components are fused using filters learned with the BPEG-M approach, while the low-pass components are fused with a "choose-max" strategy. In the experiments, five types of medical brain images and four popular fusion methods are used to verify the effectiveness of the proposed method. The experimental results show that our approach performs well in both qualitative and quantitative analysis.
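The abstract only sketches the fusion rule: CAOL analysis filters (learned offline with BPEG-M) drive the high-frequency fusion, while the low-pass part uses "choose-max". The snippet below is a minimal Python sketch of that kind of rule, not the authors' implementation; the two-scale split in split_bands, the activity measure in activity, and the random stand-in filters in caol_filters are assumptions made purely for illustration.

```python
# Hypothetical sketch of a CAOL-style fusion rule (not the paper's exact method).
# The analysis filters are assumed to be learned offline (via BPEG-M in the paper);
# random 7x7 kernels stand in for them here.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def split_bands(img, size=31):
    """Two-scale decomposition: low-pass via box filtering, high-frequency residual."""
    low = uniform_filter(img, size=size)
    return low, img - low

def activity(high, filters):
    """Sum of absolute analysis responses |d_k * x| as a per-pixel activity measure."""
    return sum(np.abs(convolve(high, d, mode="nearest")) for d in filters)

def fuse(img_a, img_b, filters):
    low_a, high_a = split_bands(img_a)
    low_b, high_b = split_bands(img_b)
    # High frequency: per pixel, keep the source with the larger analysis activity.
    mask_h = activity(high_a, filters) >= activity(high_b, filters)
    high_f = np.where(mask_h, high_a, high_b)
    # Low pass: "choose-max" on the absolute low-pass values.
    low_f = np.where(np.abs(low_a) >= np.abs(low_b), low_a, low_b)
    return low_f + high_f

# Usage with random stand-ins for pre-registered source slices and learned filters.
rng = np.random.default_rng(0)
src_a = rng.random((256, 256))
src_b = rng.random((256, 256))
caol_filters = [rng.standard_normal((7, 7)) for _ in range(8)]  # placeholder filters
fused = fuse(src_a, src_b, caol_filters)
```

The design intent mirrors the abstract: detail (high-frequency) content is selected where the learned analysis operators respond most strongly, while the smooth (low-pass) background is resolved by a simple per-pixel maximum.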

Published in

ICITEE '21: Proceedings of the 4th International Conference on Information Technologies and Electrical Engineering
October 2021, 477 pages
ISBN: 9781450386494
DOI: 10.1145/3513142
Copyright © 2021 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
