
Multifocus image fusion using convolutional neural network

Published in: Multimedia Tools and Applications

Abstract

Acquiring all-in-focus images is important in the multimedia era. Limited by the depth of field of the optical lens, it is difficult to capture an image in which all targets are in focus. One possible solution is to merge the information of several complementary images of the same scene. In this article, we employ a two-channel convolutional neural network to derive the clarity map of the source images. The clarity map is then smoothed by morphological filtering. Finally, the fused image is constructed by merging the clear parts of the source images. Experimental results show that our approach outperforms many previous fusion approaches in both visual quality and quantitative evaluations.
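The pipeline described above (clarity map, morphological smoothing, pixelwise merging) can be sketched as follows. This is a minimal illustration, not the authors' implementation: in place of the paper's two-channel CNN, local Laplacian energy stands in as the per-pixel clarity measure; the function name `fuse_multifocus` and the patch size are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def fuse_multifocus(img_a, img_b, patch=8):
    """Fuse two grayscale images focused at different depths.

    Clarity is approximated by local Laplacian energy (a stand-in for
    the paper's two-channel CNN); the resulting binary decision map is
    cleaned with morphological opening/closing before pixel selection.
    """
    def clarity(img):
        lap = ndimage.laplace(img.astype(np.float64))
        # local energy of the Laplacian as a simple focus measure
        return ndimage.uniform_filter(lap ** 2, size=patch)

    # binary clarity map: True where img_a is the sharper source
    decision = clarity(img_a) > clarity(img_b)
    # morphological filtering removes isolated misclassified pixels
    decision = ndimage.binary_opening(decision, structure=np.ones((3, 3)))
    decision = ndimage.binary_closing(decision, structure=np.ones((3, 3)))
    # merge the clear parts of the two sources
    return np.where(decision, img_a, img_b)
```

On a pair of images where each source is sharp in a complementary half of the scene, the fused result should be closer to the underlying all-in-focus image than either input.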





Acknowledgments

The research in this paper is sponsored by the National Natural Science Foundation of China (No. 61711540303, No. 61701327), the Science Foundation of the Sichuan Science and Technology Department (No. 2018GZ0178), and the Open Research Fund of the State Key Laboratory (No. 614250304010517).

Author information

Correspondence to Xiaomin Yang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wen, Y., Yang, X., Celik, T. et al. Multifocus image fusion using convolutional neural network. Multimed Tools Appl 79, 34531–34543 (2020). https://doi.org/10.1007/s11042-020-08945-z

