Abstract
In this paper, we propose a novel end-to-end model for multi-focus image fusion based on generative adversarial networks, termed ACGAN. Since corresponding pixels of the two source images follow different gradient distributions depending on whether they are in focus, we propose an adaptive weight block that uses these gradients to judge whether each source pixel is focused. Under this guidance, we design a dedicated loss function that forces the fused image to follow the same distribution as the focused regions of the source images. In addition, a generator and a discriminator are trained to form a stable adversarial relationship: the generator learns to produce a realistic fused image that can fool the discriminator, while the discriminator learns to distinguish the generated fused image from the ground truth. As a result, the distribution of the fused image closely approaches that of the ground truth. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our ACGAN over the state of the art in terms of both visual effect and objective evaluation metrics.
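To make the mechanism concrete, the following minimal sketch (our illustration, not the authors' released code) shows one way the gradient-based adaptive weighting and the weighted content loss could be realized in PyTorch. The Laplacian filter as the gradient operator, the soft normalization of the two weight maps, and the function names are assumptions for illustration.

```python
# Sketch of a gradient-based adaptive weight block and the weighted content
# loss it guides, assuming grayscale source tensors of shape (B, 1, H, W).
import torch
import torch.nn.functional as F

# Laplacian kernel as a simple per-pixel gradient-strength estimator
# (an assumption; any gradient operator could play this role).
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    """Approximate per-pixel gradient strength of an image batch."""
    return torch.abs(F.conv2d(img, LAPLACIAN.to(img.device), padding=1))

def adaptive_weights(src1: torch.Tensor, src2: torch.Tensor, eps: float = 1e-8):
    """Soft focus maps: pixels with larger gradients are treated as focused."""
    g1, g2 = gradient_magnitude(src1), gradient_magnitude(src2)
    w1 = g1 / (g1 + g2 + eps)   # values in [0, 1]; w1 + w2 == 1 per pixel
    return w1, 1.0 - w1

def content_loss(fused: torch.Tensor, src1: torch.Tensor, src2: torch.Tensor):
    """Pull the fused image toward the (estimated) focused regions."""
    w1, w2 = adaptive_weights(src1, src2)
    return torch.mean(w1 * (fused - src1) ** 2 + w2 * (fused - src2) ** 2)
```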
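The adversarial part of the objective can be sketched in the same spirit. Assuming a least-squares (LSGAN-style) formulation, which is one common choice and not necessarily the authors' exact loss, the discriminator is pushed to score the ground truth as real and the fused output as fake, while the generator combines the opposite adversarial term with the weighted content loss above; the trade-off weight `lam` is a hypothetical hyperparameter.

```python
# Hedged sketch of the adversarial objectives; D and G are hypothetical
# discriminator/generator modules, gt is the ground-truth all-in-focus image.
import torch

def discriminator_loss(D, fused: torch.Tensor, gt: torch.Tensor):
    # D should score ground truth as real (target 1) and the fused output
    # as fake (target 0); detach() keeps generator weights out of this step.
    return torch.mean((D(gt) - 1.0) ** 2) + torch.mean(D(fused.detach()) ** 2)

def generator_loss(D, fused: torch.Tensor, content_term: torch.Tensor,
                   lam: float = 100.0):
    # The generator tries to make D score the fused image as real, while the
    # content term (e.g., content_loss from the previous sketch) keeps the
    # fused image close to the focused regions of the sources.
    return torch.mean((D(fused) - 1.0) ** 2) + lam * content_term
```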











Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant No. 61903279.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Huang, J., Le, Z., Ma, Y. et al. A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Comput & Applic 32, 15119–15129 (2020). https://doi.org/10.1007/s00521-020-04863-1