DOI: 10.1145/3513142.3513196
Research article, ICITEE Conference Proceedings

Visible and Infrared Image Fusion via Convolutional Analysis Operator

Published: 13 April 2022

ABSTRACT

As a synthesis model, conventional convolutional sparse representation (CSR) / convolutional dictionary learning (CDL) suffers from model mismatch. This deficiency leads to loss of detail and over-smoothing in the infrared-visible fusion results of CDL/CSR-based methods. Building on work in analysis operator learning, Chun introduced convolution into the "analysis" signal model and proposed the convolutional analysis operator learning (CAOL) framework, which is trained with the convergent block proximal extrapolated gradient method using majorizers (BPEG-M) and thereby avoids the model-mismatch problem. To sidestep the shortcomings of CDL/CSR-based fusion methods, this paper brings the CAOL idea into image fusion and proposes a new infrared-visible image fusion method based on the convolutional analysis operator. In the experiments, the proposed framework and 11 representative methods are applied to 8 common infrared/visible image pairs to verify its performance. The average values of the metrics QABF, QE, and QP over the 8 examples are 0.6071, 0.4011, and 0.4137 (for the "city" training set) and 0.6065, 0.4036, and 0.4131 (for the "fruit" training set), respectively. Compared with the deep-learning-based method, the proposed framework improves QABF, QE, and QP by 15.13%, 22.79%, and 13.37% (for the "city" training set) and by 15.02%, 23.37%, and 13.25% (for the "fruit" training set), respectively. The experimental results show that the proposed method surpasses the state of the art in both visual quality and objective assessment.
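For context, the filter-learning problem CAOL solves can be written down compactly. The sketch below follows the formulation in [17] from memory, up to normalization conventions, and is included only to make the "analysis" model concrete: given L training images x_l, CAOL learns K filters d_k of size R (gathered as columns of a matrix D) and sparse feature maps z_{l,k} under a tight-frame constraint on the filters:

\[
\min_{\{d_k\},\,\{z_{l,k}\}} \; \sum_{l=1}^{L} \sum_{k=1}^{K} \frac{1}{2} \left\lVert d_k \circledast x_l - z_{l,k} \right\rVert_2^2 + \alpha \left\lVert z_{l,k} \right\rVert_0
\quad \text{s.t.} \quad D D^H = \frac{1}{R} I,
\]

where \circledast denotes convolution and \alpha weights sparsity. The hard \ell_0 penalty and the nonconvex filter constraint are what BPEG-M's majorized proximal updates are designed to handle.

The percentage gains quoted above are consistent with the usual relative-improvement formula (proposed − baseline) / baseline. The following minimal Python sketch back-solves the implied deep-learning baseline values from the reported averages and percentages; the baselines are derived here for illustration, not taken from the paper:

# Relative improvement (percent) of a proposed metric value over a baseline.
def rel_improvement(proposed, baseline):
    return (proposed - baseline) / baseline * 100.0

# Averages reported for the "city" training set and the reported gains
# over the deep-learning method, in percent.
proposed = {"QABF": 0.6071, "QE": 0.4011, "QP": 0.4137}
gain = {"QABF": 15.13, "QE": 22.79, "QP": 13.37}

for name, value in proposed.items():
    # Invert value = baseline * (1 + gain/100) to recover the baseline.
    baseline = value / (1.0 + gain[name] / 100.0)
    print(name, round(baseline, 4), round(rel_improvement(value, baseline), 2))

Running this gives implied baselines of roughly 0.527, 0.327, and 0.365 for QABF, QE, and QP, consistent with the claimed improvements.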

References

[1] P. Hu, F. Yang, L. Ji, Z. Li, and H. Wei, "An efficient fusion algorithm based on hybrid multiscale decomposition for infrared-visible and multi-type images," Infrared Physics & Technology, vol. 112, no. 2, 2020.
[2] Y. Yang, Y. Zhang, S. Huang, Y. Zuo, and J. Sun, "Infrared and visible image fusion using visual saliency sparse representation and detail injection model," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-15, 2021.
[3] Y. Liu, S. Liu, and Z. Wang, "A general framework for image fusion based on multi-scale transform and sparse representation," Information Fusion, vol. 24, pp. 147-164, 2015.
[4] H. Yin and S. Li, "Multimodal image fusion with joint sparsity model," Optical Engineering, vol. 50, 2011.
[5] B. Yang and S. Li, "Visual attention guided image fusion with sparse representation," Optik - International Journal for Light and Electron Optics, vol. 125, pp. 4881-4888, 2014.
[6] Z. Gao and C. Zhang, "Texture clear multi-modal image fusion with joint sparsity model," Optik - International Journal for Light and Electron Optics, 2016.
[7] C. Zhang and L. Yi, "Multimodal image fusion with adaptive joint sparsity model," Journal of Electronic Imaging, vol. 28, p. 013043, 2019.
[8] C. Zhang, Z. Feng, Z. Gao, X. Jin, D. Yan, and L. Yi, "Salient feature multimodal image fusion with a joint sparse model and multiscale dictionary learning," Optical Engineering, vol. 59, p. 051402, 2020.
[9] C. H. Liu, Y. Qi, and W. R. Ding, "Infrared and visible image fusion method based on saliency detection in sparse domain," Infrared Physics & Technology, pp. 94-102, 2017.
[10] B. Wohlberg, "Efficient algorithms for convolutional sparse representations," IEEE Transactions on Image Processing, vol. 25, pp. 301-315, 2016.
[11] C. Garcia-Cardona and B. Wohlberg, "Convolutional dictionary learning: A comparative review and new algorithms," IEEE Transactions on Computational Imaging, vol. 4, no. 3, pp. 366-381, 2018.
[12] Y. Liu, X. Chen, and R. K. Ward, "Image fusion with convolutional sparse representation," IEEE Signal Processing Letters, vol. 23, pp. 1882-1886, 2016.
[13] C. Zhang, Z. Yue, L. Yi, X. Jin, and X. Yang, "Infrared and visible image fusion using NSCT and convolutional sparse representation," in Image and Graphics, pp. 393-405, 2019.
[14] C. Zhang, Z. Yue, D. Yan, and X. Yang, "Infrared and visible image fusion using joint convolution sparse coding," in Proc. Second International Conference on Image, Video Processing and Artificial Intelligence, 2019.
[15] C. Zhang, D. Yan, L. Yi, and Z. Pei, "Visible and infrared image fusion based on convolutional sparse coding with gradient regularization," in Proc. 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019.
[16] H. Li, X.-J. Wu, and J. Kittler, "Infrared and visible image fusion using a deep learning framework," in Proc. 2018 24th International Conference on Pattern Recognition (ICPR), pp. 2705-2710, 2018.
[17] I. Y. Chun and J. A. Fessler, "Convolutional analysis operator learning: Acceleration and convergence," IEEE Transactions on Image Processing, vol. 29, no. 1, pp. 2108-2122, 2020.
[18] Z. Zhou et al., "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters," Information Fusion, vol. 30, pp. 15-26, 2016.
[19] J. Ma, C. Chen, C. Li, and J. Huang, "Infrared and visible image fusion via gradient transfer and total variation minimization," Information Fusion, vol. 31, pp. 100-109, 2016.
[20] V. P. S. Naidu, "Image fusion technique using multi-resolution singular value decomposition," Defence Science Journal, vol. 61, no. 5, pp. 479-484, 2011.
[21] J. Ma, Z. Zhou, B. Wang, and H. Zong, "Infrared and visible image fusion based on visual saliency map and weighted least square optimization," Infrared Physics & Technology, vol. 82, pp. 8-17, 2017.
[22] H. Li and X.-J. Wu, "Infrared and visible image fusion using latent low-rank representation," arXiv preprint, https://arxiv.org/abs/1804.08992, 2018.
[23] F. Heide, W. Heidrich, and G. Wetzstein, "Fast and flexible convolutional sparse coding," in Proc. IEEE CVPR, Boston, MA, Jun. 2015, pp. 5135-5143.
[24] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, "Deconvolutional networks," in Proc. IEEE CVPR, San Francisco, CA, Jun. 2010, pp. 2528-2535.
[25] https://figshare.com/articles/TN_Image_Fusion_Dataset/1008029
[26] https://www.researchgate.net/publication/304246314
[27] Z. Liu et al., "Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 1, pp. 94-109, 2011.
[28] C. S. Xydeas and V. Petrović, "Objective image fusion performance measure," Military Technical Courier, vol. 56, no. 4, pp. 181-193, 2000.
[29] G. Piella and H. Heijmans, "A new quality metric for image fusion," in Proc. IEEE International Conference on Image Processing (ICIP), 2003.
[30] J. Zhao, R. Laganière, and Z. Liu, "Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement," International Journal of Innovative Computing, Information and Control, vol. 3, no. 6, 2006.
[31] Y. Liu, X. Chen, J. Cheng, et al., "Infrared and visible image fusion with convolutional neural networks," International Journal of Wavelets, Multiresolution and Information Processing, vol. 16, no. 3, 2018.
[32] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Information Fusion, 2019.
[33] Y. Yang, J. Liu, S. Huang, W. Wan, and J. Guan, "Infrared and visible image fusion via texture conditional generative adversarial network," IEEE Transactions on Circuits and Systems for Video Technology, 2021.
[34] J. Li, H. Huo, K. Liu, et al., "Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance," Information Sciences, 2020.
[35] L. Li, Z. Xia, H. Han, et al., "Infrared and visible image fusion using a shallow CNN and structural similarity constraint," IET Image Processing, vol. 14, no. 3, 2020.

Published in

ICITEE '21: Proceedings of the 4th International Conference on Information Technologies and Electrical Engineering, October 2021, 477 pages.
ISBN: 9781450386494
DOI: 10.1145/3513142

Copyright © 2021 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers: research-article (refereed limited)
