ABSTRACT
As a synthetic model, conventional convolutional sparse representation (CSR) / convolutional dictionary learning (CDL) suffers from model mismatch, which leads to loss of detail and over-smoothing in infrared-visible fusion results produced by CDL/CSR methods. Building on the study of analysis operator learning, Chun introduced convolution into an "analysis" signal model and proposed the convolutional analysis operator learning (CAOL) framework, which addresses the model-mismatch problem by solving the learning problem with the convergent Block Proximal Extrapolated Gradient method using Majorizers (BPEG-M). To avoid the shortcomings of CDL/CSR-based fusion methods, this paper introduces the CAOL idea into image fusion and proposes a new infrared-visible image fusion method based on the convolutional analysis operator. In experiments, the proposed framework and 11 representative methods are applied to 8 common infrared/visible image pairs to verify performance. The average values of the metrics QABF, QE, and QP over the 8 examples are 0.6071, 0.4011, and 0.4137 (for the "city" training set) and 0.6065, 0.4036, and 0.4131 (for the "fruit" training set), respectively. Compared with the deep learning-based method, the proposed framework improves QABF, QE, and QP by 15.13%, 22.79%, and 13.37% (for the "city" training set) and by 15.02%, 23.37%, and 13.25% (for the "fruit" training set), respectively. The experimental results show that the proposed method surpasses the state of the art in terms of both visual quality and objective assessment.
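The core sparsifying step of a CAOL-style "analysis" model can be illustrated in a few lines: an image is convolved with a bank of learned analysis filters, and the responses are hard-thresholded (the proximal map of the l0 penalty used in CAOL). The sketch below is illustrative only, assuming random placeholder filters and an arbitrary penalty weight `alpha`, not the learned operators or parameters of the paper.

```python
import numpy as np

def conv2_circ(x, h):
    # Circular 2-D convolution via the FFT (common in convolutional models):
    # zero-pad the filter to the image size, multiply spectra, invert.
    H = np.zeros_like(x)
    kh, kw = h.shape
    H[:kh, :kw] = h
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(H)))

def hard_threshold(z, alpha):
    # Proximal map of alpha * ||z||_0: keep coefficients with
    # magnitude >= sqrt(2 * alpha), zero out the rest.
    return np.where(np.abs(z) >= np.sqrt(2.0 * alpha), z, 0.0)

def analysis_sparse_code(image, filters, alpha):
    # Analysis-style sparse coding: filter responses, then hard threshold.
    return [hard_threshold(conv2_circ(image, h), alpha) for h in filters]

# Toy example with random (placeholder) filters.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
filt = [rng.standard_normal((5, 5)) / 5.0 for _ in range(4)]
codes = analysis_sparse_code(img, filt, alpha=1.0)
print(len(codes), codes[0].shape)  # one sparse feature map per filter
```

In a fusion setting, such feature maps from the infrared and visible inputs would be compared (e.g., by activity level) to build the fused result; here only the analysis/thresholding step is sketched.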
- [1] P. Hu, F. Yang, L. Ji, Z. Li, and H. Wei, "An efficient fusion algorithm based on hybrid multiscale decomposition for infrared-visible and multi-type images," Infrared Physics & Technology, vol. 112, no. 2, 2020.
- [2] Y. Yang, Y. Zhang, S. Huang, Y. Zuo, and J. Sun, "Infrared and Visible Image Fusion Using Visual Saliency Sparse Representation and Detail Injection Model," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-15, 2021.
- [3] Y. Liu, S. Liu, and Z. Wang, "A general framework for image fusion based on multi-scale transform and sparse representation," Information Fusion, vol. 24, pp. 147-164, 2015.
- [4] H. Yin and S. Li, "Multimodal image fusion with joint sparsity model," Optical Engineering, vol. 50, 2011.
- [5] B. Yang and S. Li, "Visual attention guided image fusion with sparse representation," Optik - International Journal for Light and Electron Optics, vol. 125, pp. 4881-4888, 2014.
- [6] Z. Gao and C. Zhang, "Texture clear multi-modal image fusion with joint sparsity model," Optik - International Journal for Light and Electron Optics, 2016.
- [7] C. Zhang and L. Yi, "Multimodal image fusion with adaptive joint sparsity model," Journal of Electronic Imaging, vol. 28, p. 013043, 2019.
- [8] C. Zhang, Z. Feng, Z. Gao, X. Jin, D. Yan, and L. Yi, "Salient feature multimodal image fusion with a joint sparse model and multiscale dictionary learning," Optical Engineering, vol. 59, p. 051402, 2020.
- [9] C. H. Liu, Y. Qi, and W. R. Ding, "Infrared and visible image fusion method based on saliency detection in sparse domain," Infrared Physics & Technology, pp. 94-102, 2017.
- [10] B. Wohlberg, "Efficient algorithms for convolutional sparse representations," IEEE Transactions on Image Processing, vol. 25, pp. 301-315, 2016.
- [11] C. Garcia-Cardona and B. Wohlberg, "Convolutional Dictionary Learning: A Comparative Review and New Algorithms," IEEE Transactions on Computational Imaging, vol. 4, no. 3, pp. 366-381, 2018.
- [12] Y. Liu, X. Chen, and R. K. Ward, "Image fusion with convolutional sparse representation," IEEE Signal Processing Letters, vol. 23, pp. 1882-1886, 2016.
- [13] C. Zhang, Z. Yue, L. Yi, X. Jin, and X. Yang, "Infrared and Visible Image Fusion using NSCT and Convolutional Sparse Representation," Image and Graphics, pp. 393-405, 2019.
- [14] C. Zhang, Z. Yue, D. Yan, and X. Yang, "Infrared and visible image fusion using joint convolution sparse coding," in Proc. Second International Conference on Image, Video Processing and Artificial Intelligence, 2019.
- [15] C. Zhang, D. Yan, L. Yi, and Z. Pei, "Visible and infrared image fusion based on convolutional sparse coding with gradient regularization," in Proc. 2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), 2019.
- [16] H. Li, X.-J. Wu, and J. Kittler, "Infrared and Visible Image Fusion using a Deep Learning Framework," in Proc. 24th International Conference on Pattern Recognition (ICPR), pp. 2705-2710, 2018.
- [17] I. Y. Chun and J. A. Fessler, "Convolutional analysis operator learning: Acceleration and convergence," IEEE Transactions on Image Processing, vol. 29, no. 1, pp. 2108-2122, 2020.
- [18] Z. Zhou et al., "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters," Information Fusion, vol. 30, pp. 15-26, 2016.
- [19] J. Ma, C. Chen, C. Li, and J. Huang, "Infrared and visible image fusion via gradient transfer and total variation minimization," Information Fusion, vol. 31, pp. 100-109, 2016.
- [20] V. P. S. Naidu, "Image Fusion technique using Multi-resolution singular Value decomposition," Defence Science Journal, vol. 61, no. 5, pp. 479-484, 2011.
- [21] J. Ma, Z. Zhou, B. Wang, and H. Zong, "Infrared and visible image fusion based on visual saliency map and weighted least square optimization," Infrared Physics & Technology, vol. 82, pp. 8-17, 2017.
- [22] H. Li and X.-J. Wu, "Infrared and visible image fusion using Latent Low-Rank Representation," arXiv preprint, https://arxiv.org/abs/1804.08992, 2018.
- [23] F. Heide, W. Heidrich, and G. Wetzstein, "Fast and flexible convolutional sparse coding," in Proc. IEEE CVPR, Boston, MA, Jun. 2015, pp. 5135-5143.
- [24] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, "Deconvolutional networks," in Proc. IEEE CVPR, San Francisco, CA, Jun. 2010, pp. 2528-2535.
- [25] https://figshare.com/articles/TN_Image_Fusion_Dataset/1008029
- [26] https://www.researchgate.net/publication/304246314
- [27] Z. Liu et al., "Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 1, pp. 94-109, 2011.
- [28] C. S. Xydeas and V. Petrović, "Objective image fusion performance measure," Military Technical Courier, vol. 56, no. 4, pp. 181-193, 2000.
- [29] G. Piella and H. Heijmans, "A new quality metric for image fusion," in Proc. International Conference on Image Processing (ICIP), IEEE, 2003.
- [30] J. Zhao, R. Laganiere, and Z. Liu, "Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement," International Journal of Innovative Computing, Information and Control, vol. 3, no. 6, 2006.
- [31] Y. Liu, X. Chen, J. Cheng, et al., "Infrared and visible image fusion with convolutional neural networks," International Journal of Wavelets, Multiresolution and Information Processing, vol. 16, no. 3, 2018.
- [32] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, "FusionGAN: A generative adversarial network for infrared and visible image fusion," Information Fusion, 2019.
- [33] Y. Yang, J. Liu, S. Huang, W. Wan, and J. Guan, "Infrared and Visible Image Fusion via Texture Conditional Generative Adversarial Network," IEEE Transactions on Circuits and Systems for Video Technology, 2021.
- [34] J. Li, H. T. Huo, K. Liu, et al., "Infrared and Visible Image Fusion Using Dual Discriminators Generative Adversarial Networks with Wasserstein Distance," Information Sciences, 2020.
- [35] L. Li, Z. Xia, H. Han, et al., "Infrared and visible image fusion using a shallow CNN and structural similarity constraint," IET Image Processing, vol. 14, no. 3, 2020.