DOI: 10.1145/3511176.3511188

A multi-focus image fusion method based on nested U-Net


ABSTRACT

Multi-focus image fusion is a popular research direction within image fusion. Because of the complexity of natural images, accurately identifying the in-focus region has long been a difficult problem, especially along the boundary between sharp and blurred areas in complex scenes. To better determine the focused regions of the source images and obtain a clear fused image, an improved U2-Net model is used to analyze the focused regions, and a multi-scale feature extraction scheme is used to generate the decision map. The algorithm uses the NYU-D2 depth dataset as its training data; to achieve a better training effect, the training dataset is produced by combining the Graph Cut image segmentation method with manual adjustment. Experimental results show that, compared with several recent algorithms, the proposed fusion method obtains accurate decision maps and performs better in both visual perception and objective evaluation.
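The abstract describes a decision-map pipeline: a network scores each pixel of the source images for focus, and the fused image is assembled by selecting the in-focus pixel from each source. The Python sketch below illustrates only that final fusion step, assuming a `decision_map` already produced by a focus-segmentation network such as the improved U2-Net; the function name and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_with_decision_map(img_a, img_b, decision_map, threshold=0.5):
    """Fuse two registered multi-focus source images with a decision map.

    img_a, img_b : H x W x 3 float arrays (the two source images).
    decision_map : H x W float array in [0, 1]; higher values mean the
                   pixel of img_a is judged in focus (hypothetical output
                   of a focus-segmentation network).
    """
    # Binarize the network's soft output into a hard focus mask.
    mask = (decision_map >= threshold).astype(img_a.dtype)
    # Broadcast the mask over the color channels and take, per pixel,
    # the source image judged to be in focus.
    mask = mask[..., np.newaxis]
    return mask * img_a + (1.0 - mask) * img_b
```

The abstract also mentions building ground-truth masks for the NYU-D2 training images by combining Graph Cut segmentation with manual adjustment. A minimal sketch of the automatic part, using OpenCV's GrabCut (an iterated graph-cut method) with a hypothetical bounding box and file name, might look as follows; the manual refinement the authors apply afterwards is not shown.

```python
import cv2
import numpy as np

# Hypothetical training image; in the paper the images come from NYU-D2.
img = cv2.imread("nyu_d2_sample.png")

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough box around the in-focus foreground (illustrative coordinates).
rect = (50, 50, 300, 200)

# Five iterations of graph-cut foreground extraction seeded by the box.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels labeled definite or probable foreground form the binary focus mask.
focus_mask = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0
).astype(np.uint8)
```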


  • Published in

    ICVIP '21: Proceedings of the 2021 5th International Conference on Video and Image Processing
    December 2021
    219 pages
    ISBN: 9781450385893
    DOI: 10.1145/3511176

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 12 March 2022


    Qualifiers

    • research-article
    • Research
    • Refereed limited
