Abstract
Because focus information is captured at different optical depths, no single image can contain all of the relevant information about the objects in a scene. Multifocus image fusion gathers the focus information scattered over several partially focused images into a single image with enhanced contrast and sharpness. To overcome troubling weaknesses of existing fusion methods, such as incomplete boundary information and partial loss of focus, a new network called BCNN, which combines a hierarchical Bayesian model with a convolutional neural network (CNN), is constructed. The hierarchical Bayesian component preserves texture features and edge information well, and it replaces the traditional learning of a single fixed weight value with the learning of salient features represented by a mean and a variance. Meanwhile, the CNN jointly and deeply learns the activity-level measurement and the fusion rule, avoiding sophisticated, hand-designed fusion rules. Based on these ideas, a novel BCNN-based fusion model for multifocus images is proposed. Extensive experiments demonstrate the accuracy and effectiveness of the proposed method, both in numerical evaluation and in visual comparison.
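To make the two ideas in the abstract concrete, the following is a minimal, hypothetical sketch in PyTorch, not the authors' published BCNN code: a Bayesian convolutional layer that learns a mean and a variance for each weight instead of a single fixed value, feeding a small CNN head that predicts a per-pixel focus-decision map for a pair of partially focused source images. The class names (BayesianConv2d, FusionNetSketch), layer sizes, and the simple weighted-combination fusion rule are illustrative assumptions.

```python
# Illustrative sketch only: learned weight mean/variance (hierarchical Bayesian idea)
# plus a small CNN that learns the activity level / fusion decision map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianConv2d(nn.Module):
    """Convolution whose weights are Gaussian: w ~ N(mu, sigma^2)."""

    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        shape = (out_ch, in_ch, kernel_size, kernel_size)
        self.weight_mu = nn.Parameter(torch.randn(shape) * 0.05)
        self.weight_logvar = nn.Parameter(torch.full(shape, -6.0))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.padding = padding

    def forward(self, x):
        # Reparameterization trick: sample weights from the learned
        # mean/variance so gradients reach both parameters.
        std = torch.exp(0.5 * self.weight_logvar)
        weight = self.weight_mu + std * torch.randn_like(std)
        return F.conv2d(x, weight, self.bias, padding=self.padding)


class FusionNetSketch(nn.Module):
    """Takes two grayscale source images and predicts a per-pixel focus map."""

    def __init__(self):
        super().__init__()
        self.bayes = BayesianConv2d(2, 16, kernel_size=3, padding=1)
        self.head = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),  # focus-decision map logits
        )

    def forward(self, src_a, src_b):
        feats = self.bayes(torch.cat([src_a, src_b], dim=1))
        decision = torch.sigmoid(self.head(feats))
        # Weighted combination of the two sources by the learned decision map.
        return decision * src_a + (1.0 - decision) * src_b


if __name__ == "__main__":
    a = torch.rand(1, 1, 64, 64)  # near-focused source
    b = torch.rand(1, 1, 64, 64)  # far-focused source
    fused = FusionNetSketch()(a, b)
    print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

In this sketch the Bayesian layer captures the abstract's point about learning means and variances rather than fixed weights, while the convolutional head stands in for the jointly learned activity-level measurement and fusion rule.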
Funding
This study was supported by the Shandong Provincial Natural Science Foundation (project no. ZR2021MF017) and by the SDUT and Zibo City Integration Development Project (project no. 2020SNPT0055).
Ethics declarations
The authors of this work declare that they have no conflicts of interest.
Additional information
Publisher’s Note.
Allerton Press remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Liu, C., Wang, Y., Wang, L., et al. BCNN: An Effective Multifocus Image Fusion Method Based on the Hierarchical Bayesian and Convolutional Neural Networks. Aut. Control Comp. Sci. 58, 166–176 (2024). https://doi.org/10.3103/S0146411624700068