Abstract
A new method is proposed in this paper for merging two spatially registered images with different focus settings. It is based on multi-resolution wavelet decomposition, Self-Organizing Feature Map (SOFM) neural networks, and evolution strategies (ES). A normalized feature image, representing the local clarity difference between corresponding regions of the two source images, is extracted by a wavelet transform without down-sampling. The feature image is clustered by the SOFM learning algorithm, and every pixel pair of the source images is assigned to a class corresponding to a particular clarity difference. Pixel pairs in different classes are then merged with different fusion factors, which are determined by evolution strategies to achieve the best fusion performance. Experimental results show that the proposed method outperforms the wavelet transform (WT) method.
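The abstract only outlines the three-stage pipeline, so the minimal Python sketch below shows one way the stages could fit together. It is not the authors' implementation: the undecimated wavelet clarity feature is approximated by a local Laplacian high-pass energy, the SOFM is a tiny one-dimensional Kohonen map, and the per-class fusion factors are tuned by a simple (1+1)-ES maximising a sharpness proxy. All names and parameters (clarity_feature, n_classes, sigma0, the fitness measure) are illustrative assumptions.

```python
# Hedged sketch of the fusion pipeline described in the abstract.
# The clarity feature uses a Laplacian high-pass response as a stand-in
# for the undecimated wavelet detail energy used in the paper.
import numpy as np

def clarity_feature(img, win=3):
    """Local high-frequency energy as a proxy for region clarity."""
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")
    # 3x3 Laplacian via shifted copies.
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img
    # Average |high-pass| over a small window (win must be odd).
    k = win // 2
    e = np.pad(np.abs(lap), k, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += e[k + dy:k + dy + img.shape[0], k + dx:k + dx + img.shape[1]]
    return out / win**2

def sofm_1d(samples, n_nodes=5, epochs=20, lr0=0.5, rng=None):
    """Tiny 1-D Kohonen map clustering scalar clarity-difference features."""
    rng = np.random.default_rng(rng)
    w = rng.uniform(samples.min(), samples.max(), n_nodes)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = max(1.0 - t / epochs, 0.5)          # neighbourhood width
        for x in rng.permutation(samples):
            bmu = np.argmin(np.abs(w - x))          # best-matching unit
            h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma**2))
            w += lr * h * (x - w)
    return np.sort(w)

def one_plus_one_es(fitness, x0, sigma0=0.1, iters=200, rng=None):
    """(1+1)-ES with Gaussian mutation, used to tune the fusion factors."""
    rng = np.random.default_rng(rng)
    x = np.clip(np.asarray(x0, dtype=float), 0.0, 1.0)
    fx = fitness(x)
    for _ in range(iters):
        y = np.clip(x + sigma0 * rng.standard_normal(x.shape), 0.0, 1.0)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x

def fuse(img_a, img_b, n_classes=5, rng=0):
    # 1) Normalized clarity-difference feature for every pixel pair.
    fa, fb = clarity_feature(img_a), clarity_feature(img_b)
    diff = (fa - fb) / (fa + fb + 1e-9)
    # 2) SOFM clustering of the feature into n_classes clarity levels.
    codebook = sofm_1d(diff.ravel(), n_nodes=n_classes, rng=rng)
    labels = np.argmin(np.abs(diff[..., None] - codebook), axis=-1)
    # 3) Per-class fusion factor w_c: fused = w_c*A + (1-w_c)*B, with the
    #    ES maximising a sharpness proxy of the fused image.
    def fitness(w):
        fused = w[labels] * img_a + (1.0 - w[labels]) * img_b
        return clarity_feature(fused).mean()
    w = one_plus_one_es(fitness, x0=np.linspace(0.0, 1.0, n_classes), rng=rng)
    return w[labels] * img_a + (1.0 - w[labels]) * img_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))   # stand-ins for two registered multi-focus images
    b = rng.random((64, 64))
    print(fuse(a, b).shape)    # (64, 64)
```

In the paper itself the clarity feature comes from a wavelet decomposition without down-sampling, which keeps the detail coefficients aligned with the image grid; the Laplacian energy above is only a stand-in with the same intent.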
Foundation item: Postdoctoral Science Foundation of China (J63104020156) and National Defence Foundation (51431020204DZ0101).
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Wu, Y., Liu, C., Liao, G. (2005). Multi-focus Image Fusion Based on SOFM Neural Networks and Evolution Strategies. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3612. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539902_1
DOI: https://doi.org/10.1007/11539902_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28320-1
Online ISBN: 978-3-540-31863-7