DOI: 10.1145/3582649.3582651
research-article

D2BEGAN: A Dual-Discriminator Boundary Equilibrium Generative Adversarial Network for Infrared and Visible Image Fusion

Published: 07 April 2023

Abstract

In recent years, deep learning has been widely applied to image fusion. Because no ground-truth fused image exists, Generative Adversarial Networks (GANs) are well suited to this task. Building on the GAN framework, the Boundary Equilibrium Generative Adversarial Network (BEGAN) introduces an equilibrium-based loss that, combined with a simple model design, keeps the discriminator and generator in balance and ultimately produces high-quality images. For image fusion, it is natural to use two discriminators to guide the generator so that the fused result retains both the thermal radiation information of the infrared image and the texture detail of the visible image. Based on these considerations, we propose D2BEGAN, a BEGAN-based dual-discriminator network. In this network, the generator uses a dense block to strengthen feature extraction from the source images, and after training it performs end-to-end image fusion. To verify the model's effectiveness, we evaluated it on publicly available datasets; the proposed method produces relatively natural fused images while achieving the best metrics compared with many state-of-the-art models.
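To make the training scheme described above concrete, the following is a minimal PyTorch sketch of one dual-discriminator BEGAN step for infrared/visible fusion: a dense-block generator fuses the two inputs, each BEGAN discriminator is an autoencoder whose reconstruction error ("energy") scores an image, and a proportional-control term k_t keeps each generator/discriminator pair in equilibrium. All class names, layer sizes, and hyperparameters (gamma, lam, growth rate) are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: each conv layer sees all previously produced feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class FusionGenerator(nn.Module):
    """Fuses a concatenated (infrared, visible) pair into a single-channel image."""
    def __init__(self):
        super().__init__()
        self.dense = DenseBlock(in_ch=2)
        self.fuse = nn.Conv2d(self.dense.out_ch, 1, kernel_size=1)

    def forward(self, ir, vis):
        return torch.tanh(self.fuse(self.dense(torch.cat([ir, vis], dim=1))))

class BEGANDiscriminator(nn.Module):
    """BEGAN-style discriminator: an autoencoder whose reconstruction error scores an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def energy(self, x):
        # BEGAN "energy" L(x): pixel-wise reconstruction error of the autoencoder.
        return (x - self.net(x)).abs().mean()

def train_step(G, D_ir, D_vis, ir, vis, k_ir, k_vis, gamma=0.5, lam=1e-3):
    """One dual-discriminator BEGAN step; returns the losses and updated equilibrium terms."""
    fused = G(ir, vis)
    # Discriminator losses: low energy on real source images, high (k-weighted) energy on the fused output.
    loss_d_ir = D_ir.energy(ir) - k_ir * D_ir.energy(fused.detach())
    loss_d_vis = D_vis.energy(vis) - k_vis * D_vis.energy(fused.detach())
    # Generator loss: the fused image should look "real" to both discriminators.
    loss_g = D_ir.energy(fused) + D_vis.energy(fused)
    # Proportional control (the boundary-equilibrium term) keeps each G/D pair balanced.
    k_ir = min(max(k_ir + lam * (gamma * D_ir.energy(ir).item() - D_ir.energy(fused).item()), 0.0), 1.0)
    k_vis = min(max(k_vis + lam * (gamma * D_vis.energy(vis).item() - D_vis.energy(fused).item()), 0.0), 1.0)
    return loss_d_ir, loss_d_vis, loss_g, k_ir, k_vis
```

In practice, fusion GANs typically add content terms (e.g., intensity and gradient losses) to the generator objective; the sketch above shows only the adversarial and equilibrium parts.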




    Published In

    ICIGP '23: Proceedings of the 2023 6th International Conference on Image and Graphics Processing
    January 2023
    246 pages
    ISBN:9781450398572
    DOI:10.1145/3582649
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 07 April 2023


    Author Tags

    1. Boundary Equilibrium Generative Adversarial Network
    2. Dense block
    3. Image fusion
    4. Infrared image
    5. Visible image

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICIGP 2023
