Abstract
Acquiring all-in-focus images is important in the multimedia era. Limited by the depth of field of the optical lens, it is hard to capture an image in which all targets are in focus. One possible solution is to merge the information of several complementary images of the same scene. In this article, we employ a two-channel convolutional network to derive the clarity maps of the source images. The clarity map is then smoothed by morphological filtering. Finally, the fused image is constructed by merging the clear parts of the source images. Experimental results show that our approach outperforms many previous fusion approaches in both visual quality and quantitative evaluations.
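The fusion pipeline described above can be sketched in a few lines. This is not the paper's method: the learned two-channel CNN is replaced here by a simple Laplacian-energy focus measure as a stand-in for the clarity map, and `scipy.ndimage` binary opening/closing stands in for the morphological smoothing stage. A minimal sketch under those assumptions:

```python
import numpy as np
from scipy import ndimage

def clarity_map(img, ksize=5):
    # Stand-in for the CNN-derived clarity score: local energy of the
    # Laplacian, a classical focus measure (higher = sharper).
    lap = ndimage.laplace(img.astype(np.float64))
    return ndimage.uniform_filter(lap ** 2, size=ksize)

def fuse(img_a, img_b, ksize=5, struct=3):
    # Step 1: per-pixel clarity comparison gives a binary decision map.
    mask = clarity_map(img_a, ksize) >= clarity_map(img_b, ksize)
    # Step 2: smooth the decision map with morphological opening/closing,
    # mirroring the morphological filtering stage of the pipeline.
    se = np.ones((struct, struct), dtype=bool)
    mask = ndimage.binary_opening(mask, structure=se)
    mask = ndimage.binary_closing(mask, structure=se)
    # Step 3: merge the clear parts of the two source images.
    return np.where(mask, img_a, img_b)
```

Given two registered source images focused on different regions, `fuse(a, b)` selects, pixel by pixel, whichever source is locally sharper; the opening/closing pass removes isolated misclassified pixels so that focused regions are transferred as contiguous blocks.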
Acknowledgments
This research is sponsored by the National Natural Science Foundation of China (Nos. 61711540303 and 61701327), the Science Foundation of the Sichuan Science and Technology Department (No. 2018GZ0178), and the open research fund of the State Key Laboratory (No. 614250304010517).
Cite this article
Wen, Y., Yang, X., Celik, T. et al. Multifocus image fusion using convolutional neural network. Multimed Tools Appl 79, 34531–34543 (2020). https://doi.org/10.1007/s11042-020-08945-z