A generative adversarial network with adaptive constraints for multi-focus image fusion

  • Original Article
  • Published in Neural Computing and Applications

Abstract

In this paper, we propose a novel end-to-end model for multi-focus image fusion based on generative adversarial networks, termed ACGAN. Because corresponding pixels in the two source images follow different gradient distributions, we propose an adaptive weight block that uses these gradients to decide whether each source pixel is in focus. Under this guidance, we design a dedicated loss function that forces the fused image to share the distribution of the focused regions in the source images. In addition, a generator and a discriminator are trained to form a stable adversarial relationship: the generator is trained to produce a realistic fused image that is expected to fool the discriminator, while the discriminator is trained to distinguish the generated fused image from the ground truth. As a result, the fused image closely matches the ground truth in probability distribution. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of ACGAN over the state of the art, in terms of both visual effect and objective evaluation metrics.
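The abstract outlines two technical ingredients: a gradient-driven adaptive weight block and an adversarially trained generator/discriminator pair. The paper's exact architecture and loss coefficients are not given on this page, so the PyTorch sketch below is only illustrative: the Sobel gradients, the 9x9 local averaging window, the least-squares GAN targets, and the weighting coefficient lam are all assumptions, not the authors' specification.

```python
# Minimal sketch of the abstract's ideas, under stated assumptions.
# Every concrete choice (Sobel filters, window size, L2 distances,
# least-squares GAN targets, lam) is illustrative, not the paper's spec.
import torch
import torch.nn.functional as F

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via Sobel filters (one-channel input, NCHW)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx.to(img), padding=1)
    gy = F.conv2d(img, ky.to(img), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def adaptive_weights(src_a, src_b, window=9):
    """Soft focus maps: each pixel is weighted toward the source whose local
    neighbourhood has the larger gradient energy (sharper = in focus)."""
    ea = F.avg_pool2d(gradient_magnitude(src_a), window, stride=1, padding=window // 2)
    eb = F.avg_pool2d(gradient_magnitude(src_b), window, stride=1, padding=window // 2)
    w_a = ea / (ea + eb + 1e-8)          # weight for source A, in [0, 1]
    return w_a, 1.0 - w_a                # complementary weight for source B

def content_loss(fused, src_a, src_b):
    """Pull each fused pixel toward the focused source, per the weight maps."""
    w_a, w_b = adaptive_weights(src_a, src_b)
    return torch.mean(w_a * (fused - src_a) ** 2 + w_b * (fused - src_b) ** 2)

def generator_loss(d_fake, fused, src_a, src_b, lam=100.0):
    """Adversarial term (least-squares form, an assumption) plus content term."""
    adv = torch.mean((d_fake - 1.0) ** 2)   # fool D: push its score toward "real"
    return adv + lam * content_loss(fused, src_a, src_b)

def discriminator_loss(d_real, d_fake):
    """Train D to score ground truth as 1 and generated fusions as 0."""
    return 0.5 * (torch.mean((d_real - 1.0) ** 2) + torch.mean(d_fake ** 2))
```

A training step would alternate, as in standard GAN training: update the discriminator with discriminator_loss on scores for the ground truth and the current fusion, then update the generator with generator_loss so the fused image both fools the discriminator and stays close to the focused regions of the sources.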


Notes

  1. https://sites.google.com/view/durgaprasadbavirisetti/datasets.

  2. https://www.mathworks.com/matlabcentral/fileexchange/45992-standard-images-for-multifocus-image-fusion.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 61903279.

Author information

Corresponding author: Fan Fan.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Huang, J., Le, Z., Ma, Y. et al. A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Comput & Applic 32, 15119–15129 (2020). https://doi.org/10.1007/s00521-020-04863-1
