
Context enhancement to reveal a camouflaged target and to assist target localization by fusion of multispectral surveillance videos

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Camouflage is an attempt to conceal a target by making it look similar to its background, so that detection and recognition of the target become difficult for the observer (human, machine, or both). Detecting a camouflaged target in a video captured in the visible band is a significant challenge. One can use infrared (IR) video, in which the visibility of the same target is much better, but the problem with the IR band is that most of the contrast, color, and edge information about the background is lost, and localizing the target therefore becomes difficult. The objective of this work is to fuse registered videos captured in the visible and IR bands so that the target is no longer camouflaged and hence is clearly visible to the human monitor; more importantly, this should be done without losing the background details. We propose four different video fusion methods. All the proposed methods are intensity invariant, so that camouflaged targets are detected independently of the illumination conditions. The performance of these methods has been compared using the Wang–Bovik and Petrovic–Xydeas fusion metrics along with other information-theoretic indices. Experimental video results show that the proposed methods improve perception and thus facilitate detection and localization of a camouflaged target. Moreover, the fused video has minimal artefacts, as indicated by the highest peak signal-to-noise ratio, which is one of the most desired qualities of a good fusion method.
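The abstract evaluates fusion quality with the Wang–Bovik universal image quality index and peak signal-to-noise ratio. As a minimal sketch (not the paper's code; the function names are ours, and both formulas follow the standard definitions rather than any variant the authors may use), the two measures can be computed per frame as follows:

```python
# Sketch of two of the quality measures named in the abstract:
# the Wang–Bovik universal quality index (UQI) and PSNR.
import numpy as np

def wang_bovik_uqi(x, y):
    """Wang–Bovik index Q = 4*cov*mx*my / ((vx + vy) * (mx^2 + my^2)).

    Q lies in [-1, 1]; Q = 1 iff the two frames are identical
    (up to the degenerate zero-denominator case, handled below).
    """
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    denom = (vx + vy) * (mx**2 + my**2)
    return 4.0 * cov * mx * my / denom if denom != 0 else 1.0

def psnr(reference, test, peak=255.0):
    """PSNR in dB between a reference and a fused frame; higher
    values indicate fewer artefacts, as the abstract notes."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

In practice such indices are averaged over all frames of the fused video; the Petrovic–Xydeas metric (refs. 35, 37) additionally weights agreement of edge strength and orientation rather than raw intensities.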

References

  1. Wald, L.: Data fusion definitions and architectures fusion of images of different spatial resolutions. Ecole des Mines de Paris, 2002, Ch. 1, pp. 11–18

  2. Blum R., Liu Z.: Multi-Sensor Image Fusion and Its Applications. CRC Press, London (2005)

  3. Bennett E.P., Mason J.L., McMillan L.: Multispectral bilateral video fusion. IEEE Trans. Image Process. 16(5), 1185–1194 (2007)

  4. Tankus A., Yeshurun Y.: A model for visual camouflage breaking. In: BMCV 2000, LNCS 1811, 139–149 (2000)

  5. Dixon, T., Li, J., Noyes, J., Troscianko, T., Nikolov, S., Lewis, J., Canga, E., Bull, D., Canagarajah, C.: Scanpath assessment of visible and infrared side-by-side and fused video displays. In: 10th International Conference on Information Fusion, July 2007, pp. 1–8 (2007)

  6. Chen, S., Zhu, W., Leung, H.: Thermo-visual video fusion using probabilistic graphical model for human tracking. In: IEEE International Symposium on Circuits and Systems, May 2008, pp. 1926–1929 (2008)

  7. Leykin, A., Ran, Y., Hammoud, R.: Thermal-visible video fusion for moving target tracking and pedestrian classification. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR, June 2007, pp.1–8 (2007)

  8. Waxman, A., Fay, D.A., Aguilar, M., Ireland, D.B., Racamato, J.P., Streilein, W.W., Braun, M.I.: Fusion of multi-sensor imagery for night vision: color visualization, target learning and search. In: Proceedings of the Third International Conference on Information Fusion, Paris, 2000, pp. TUm/3–TUm/10 (2000)

  9. Waxman, A.M., Fay, D.A., Gove, A.N., Seibert, M., Racamato, J.P., Carrick, J.E., Savoye, E.D.: Color night vision: fusion of intensified visible and thermal IR imagery. In: Proceedings of SPIE, vol. 2463, pp. 58–68 (1995)

  10. Anwaar-ul-Haq, Gondal, I., Murshed, M.: Automated multi-sensor color video fusion for nighttime video surveillance. In: ISCC, The IEEE symposium on Computers and Communications, 2010, pp. 529–534 (2010)

  11. Li, G., Wang, K.: Applying daytime colours to nighttime imagery with an efficient colour transfer method. In: Verly, J.G., Guell, J.J. (eds.) Enhanced and Synthetic Vision, Orlando, FL, USA, 2007. The International Society for Optical Engineering, Bellingham, pp. 65590L-12 (2007)

  12. Reinhard E., Ashikhmin M., Gooch B., Shirley P.: Color transfer between images. IEEE Comput. Graph. Appl. 21, 34–41 (2001)

  13. Welsh T., Ashikhmin M., Mueller K.: Transferring color to greyscale images. ACM Trans. Graph. 21, 277–280 (2002)

  14. Toet A.: Natural colour mapping for multi band nightvision imagery. Inf. Fusion 4, 155–166 (2003)

  15. Heisele, B., Kressel, U., Ritter, W.: Tracking non-rigid, moving objects based on color cluster flow. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997, pp. 257 (1997)

  16. Pérez P., Hue C., Vermaak J., Gangnet M.: Color-based probabilistic tracking. ECCV 2002, Lecture Notes in Computer Science 2350/2002, 661–675 (2002)

  17. Comaniciu D., Ramesh V., Meer P.: Kernel-based object tracking. IEEE Trans. Pattern Anal. Mach. Intell. 25(5), 564–575 (2003)

  18. Masini, A., Branchitta, F., Diani, M., Corsini, G.: Sight enhancement through video fusion in a surveillance system. In: IEEE 14th International Conference on Image Analysis and Processing, pp. 554–559 (2007)

  19. Liu, Z., Laganiere, R.: Context enhancement through infrared vision-a modified fusion scheme. VIVA Laboratory, University of Ottawa, Canada, pp. 293–301 (2007)

  20. Simoncelli E.P., Freeman W.T.: The steerable pyramid: a flexible architecture for multi-scale derivative computation. Proceedings of International Conference on Image Processing 3, 444–447 (1995)

  21. Nikolov S., Hill P., Bull D., Canagarajah N.: Wavelets for image fusion. In: Petrosian, A., Meyer, F. (eds) Wavelets in Signal and Image Analysis, Computational Imaging and Vision Series, pp. 213–244. Kluwer, The Netherlands (2001)

  22. Guan-qun T., Da-peng L., Guang-hua L.: Application of wavelet analysis in medical image fusion. J. Xidian Univ. 31, 82–86 (2004)

  23. Shangli, C., Junmin, H., Zhongwei, L.: Medical image of PET/CT weighted fusion based on wavelet transform. In: International Conference on Bioinformatics and Biomedical Engineering (ICBBE), pp. 2523–2525 (2008)

  24. Shah, P., Merchant, S.N., Desai, U.B.: Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition. Signal Image Video Process. J. (SIViP), Springer, March (2011)

  25. Shah P., Merchant S.N., Desai U.B.: Fusion of surveillance images in infrared and visible band using curvelet, wavelet and wavelet packet transform. Int. J. Wavelets, Multiresolut. Inf. Process. 8(2), 271–292 (2010)

  26. Pajares G., de la Cruz J.M.: A wavelet-based image fusion tutorial. Pattern Recognit. 37(9), 1855–1872 (2004)

  27. Aroutchelvame, S.M., Raahemifar, K.: Architecture of wavelet packet transform for 1-D signal. In: IEEE Canadian Conference on Electronics and Computer Engineering, pp. 1304–1307 (2005)

  28. Candes, E. J., Donoho, D.: Curvelets: a surprisingly effective nonadaptive representation for objects with edges. In: Schumaker, L.L., et al. (eds.) Curves and Surfaces. Vanderbilt University Press, Nashville, pp. 105–120 (2000)

  29. Candes, E. J., Donoho, D.: New tight frames of curvelets and optimal representations of objects with smooth singularities. Technical report, Stanford University (2002)

  30. Candes, E. J., Donoho, D.: New multiscale transforms, minimum total variation synthesis: applications to edge-preserving image reconstruction. Signal Process., pp. 1519–1543 (2002)

  31. Li S., Yang B.: Multifocus image fusion by combining curvelet and wavelet transform. Pattern Recognit. Lett. 29, 1295–1301 (2008)

  32. Wang Z., Bovik A.C.: A universal image quality index. IEEE Signal Process. Lett. 9(3), 81–84 (2002)

  33. Piella, G., Heijmans, H.: A new quality metric for image fusion. In: IEEE International Conference on Image Processing, pp. 173–176 (2003)

  34. Arathi, T., Soman, K. P.: Performance evaluation of information theoretic image fusion metrics over quantitative metrics. In: International Conference on Advances in Recent Technologies in Communication and Computing, pp. 225-227 (2009)

  35. Petrovic V., Xydeas C.: Objective image fusion performance characterisation. Proceedings of International Conference on Computer Vision 2, 1866–1871 (2005)

  36. Tsagaris V., Anastassopoulos V.: Global measure for assessing image fusion methods. Opt. Eng. SPIE 45(2), 026201–1–026201–8 (2006)

  37. Petrovic V., Xydeas C.: Objective evaluation of signal-level image fusion performance. Opt. Eng. SPIE 44(8), 087003–1–087003–8 (2005)

  38. Pohl C., Genderen J.L.: Multisensor image fusion in remote sensing: concepts, methods and applications. Int. J. Remote Sens. 19(5), 823–854 (1998)

  39. Amer A., Dubois E.: Fast and reliable structure-oriented video noise estimation. IEEE Trans. Circuits Syst. Video Technol. 15(1), 113–118 (2005)

Corresponding author

Correspondence to Parul Shah.

Cite this article

Shah, P., Reddy, B.C.S., Merchant, S.N. et al. Context enhancement to reveal a camouflaged target and to assist target localization by fusion of multispectral surveillance videos. SIViP 7, 537–552 (2013). https://doi.org/10.1007/s11760-011-0257-1
