Elsevier

Information Fusion

Volume 11, Issue 2, April 2010, Pages 69-77

Fast natural color mapping for night-time imagery

https://doi.org/10.1016/j.inffus.2009.06.005

Abstract

We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers, thermal cameras) in natural daytime colors. The color mapping is derived from the combination of a multi-band image and a corresponding natural color daytime reference image. The mapping optimizes the match between the multi-band image and the reference image, and yields a nightvision image with a natural daytime color appearance. The lookup-table based mapping procedure is extremely simple and fast and provides object color constancy. Once it has been derived, the color mapping can be deployed in real-time to different multi-band image sequences of similar scenes. Displaying night-time imagery in natural colors may help human observers to process this type of imagery faster and better, thereby improving situational awareness and reducing detection and recognition times.

Introduction

Night vision cameras are a vital source of information for a wide range of critical military and law enforcement applications related to surveillance, reconnaissance, intelligence gathering, and security. The two most common night-time imaging systems are low-light-level (e.g. image-intensified) cameras, which amplify the reflected visible to near-infrared (VNIR) light, and thermal infrared (IR) cameras, which convert thermal energy from the midwave (3–5 μm) or longwave (8–12 μm) part of the spectrum into a visible image.

Until recently a gray- or green-scale representation of nightvision imagery has been the standard. However, the increasing availability of multispectral imagers and sensor systems (e.g. [1], [2], [3], [4], [5]) has led to a growing interest in the (false) color display of night vision imagery [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26].

In principle, color imagery has several benefits over monochrome imagery for surveillance, reconnaissance, and security applications. The human eye can only distinguish about 100 shades of gray at any instant. As a result, grayscale nightvision images are usually hard to interpret and may give rise to visual illusions and loss of situational awareness. Since people can discriminate several thousand colors defined by varying hue, saturation, and brightness, a false-color representation may facilitate nightvision image recognition and interpretation. For instance, color may improve feature contrast, which allows for better scene segmentation and object detection [27]. This may enable an observer to construct a more complete mental representation of the perceived scene, resulting in better situational awareness. It has indeed been found that scene understanding and recognition, reaction time, and object identification are faster and more accurate with color imagery than with monochrome imagery [12], [28], [29], [30], [31], [32], [33], [34]. Also, observers are able to selectively attend to task-relevant color targets and to ignore non-targets with a task-irrelevant color [35], [36], [37]. As a result, simply producing a false-color nightvision image by mapping multiple spectral bands into a three-dimensional color space already generates an immediate benefit, and provides a method to increase the dynamic range of a sensor system [38].

However, the quality of a color rendering is determined by the task at hand. Although general design rules can be used to ensure that the information available in the sensor image is optimally conveyed to the observer [39], it is not trivial to derive a mapping from the various sensor bands to the three independent color channels, especially when the number of bands exceeds three (e.g. with hyperspectral imagers [40]). In practice, many tasks may benefit from a representation that renders the scene in daytime colors. Jacobson and Gupta [39], [40] therefore advise using a consistent color mapping according to a natural palette. The use of natural colors facilitates object recognition by allowing access to stored color knowledge [41]. Experimental evidence indicates that object recognition depends on stored knowledge of the object’s chromatic characteristics [41]. In natural scene recognition paradigms, optimal reaction times and accuracy are obtained for normal naturally (or diagnostically) colored images, followed by their grayscale versions, and lastly by their (nondiagnostically) false-colored versions [28], [29], [31], [34], [42]. When sensors operate outside the visible waveband, artificial color mappings generally produce false-color images whose chromatic characteristics do not correspond in any intuitive or obvious way to those of a scene viewed under natural photopic illumination. As a result, this type of false-color imagery may disrupt the recognition process by denying access to stored knowledge. In that case observers need to rely on color contrast to segment a scene and recognize the objects therein. This may lead to performance that is even worse than with single-band imagery alone [43].
Experiments have indeed convincingly demonstrated that a false-color rendering of night-time imagery which resembles natural color imagery significantly improves observer performance and reaction times in tasks that involve scene segmentation and classification [10], [44], [45], [46], [47], [48], whereas color mappings that produce counterintuitive (unnatural-looking) results are detrimental to human performance [44], [45], [49]. One of the reasons often cited for inconsistent color mappings is a lack of physical color constancy [45]. Thus, the challenge is to give nightvision imagery an intuitively meaningful (“naturalistic”) and stable color appearance, to improve the viewer’s scene comprehension and enhance object recognition and discrimination [22].

Several techniques have been proposed to render night-time imagery in color (e.g. [15], [26], [50], [51], [52]).

Simply mapping the signals from different night-time sensors (sensitive in different spectral wavebands) to the individual channels of a standard color display or to the components of a perceptually de-correlated color space, sometimes preceded by a principal component transform or followed by a linear transformation of the color pixels to enhance color contrast, usually results in imagery with an unnatural color appearance (e.g. [13], [23], [25], [49], [53]).

More intuitive color schemes may be obtained by using more elaborate false-color mappings ([54]), or by opponent processing through feedforward center-surround shunting neural networks similar to those found in vertebrate color vision [6], [7], [11], [16], [17], [18], [21], [55], [56]. Although this approach produces fused night-time images with optimal color contrast, the resulting color schemes remain arbitrary and are usually not related to the actual daytime color scheme of the scene that is registered.

Toet [15] presented a color mapping that gives night-time imagery a natural color appearance. This method matches the first-order statistical properties (mean and standard deviation) of nightvision imagery to those of a target daylight color image. As a result, the color appearance of the colorized nightvision image resembles that of the natural target image. The composition of the target image should therefore be similar to that of the nightvision scene (i.e. both images should contain similar details in similar proportions). When the composition (and thus the overall color statistics) of the target image differs significantly from that of the nightvision image, the resulting colorized nightvision image may look unnatural (its color scheme will be biased toward that of the target image). For instance, the colorized nightvision image may appear too greenish if the target image contains significantly more vegetation than the nightvision image. Thus, when panning, object colors may change over time. Hence, this method has two properties that make it less suitable for practical implementation: it is computationally expensive, and it does not provide consistent object colors (object colors depend on scene context).
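The statistics-matching step described above can be sketched as follows. This is an illustrative simplification, not the published implementation: it transfers per-channel mean and standard deviation directly in the given channels, whereas the actual method first transforms the images into a perceptually de-correlated (lαβ) color space.

```python
import numpy as np

def match_statistics(source, target):
    """Transfer per-channel mean and standard deviation from target to source.

    source, target: float arrays of shape (H, W, 3). A simplified sketch of
    first-order statistics matching; the published method applies this in a
    perceptually de-correlated color space.
    """
    result = np.empty(source.shape, dtype=np.float64)
    for c in range(source.shape[2]):
        s = source[..., c].astype(np.float64)
        t = target[..., c].astype(np.float64)
        s_std = s.std()
        if s_std == 0.0:  # avoid division by zero for flat channels
            s_std = 1.0
        # shift and scale so the channel takes on the target's statistics
        result[..., c] = (s - s.mean()) / s_std * t.std() + t.mean()
    return result
```

Because the whole image contributes to each channel's mean and standard deviation, the output colors depend on scene composition, which is exactly the source of the color-constancy problem discussed above.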

Zheng and Essock [57] recently introduced an improvement of Toet’s [15] method. This new “local-coloring” method renders a nightvision image segment by segment using image segmentation, pattern recognition, histogram matching, and image fusion. Specifically, a false-color (source) image is formed by assigning the bands of the multi-band nightvision image to the three RGB (red, green, and blue) channels. A nonlinear diffusion filter is then applied to the false-colored image to reduce the number of colors. The final grayscale segments are obtained with clustering and merging techniques. Using a supervised nearest-neighbor paradigm, each segment is automatically associated with a known “color scheme”. A statistic-matching procedure is combined with histogram matching to enhance the color mapping. Instead of extracting the color set from a single target image, the mean, standard deviation, and histogram distribution of the color planes of a set of natural scene images are used as the target color properties for each color scheme. The target color schemes are grouped by scene content and color, such as plants, mountains, roads, sky, and water. In terms of computational complexity, the local-coloring method is even more expensive than Toet’s [15] original color transform, since it involves time-consuming procedures such as nonlinear diffusion, color space transforms, histogram analysis, and wavelet-based fusion.
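The histogram-matching component of the local-coloring pipeline can be sketched in isolation as below; the segmentation, nearest-neighbor classification, and fusion stages are omitted, and this standard single-plane formulation is an assumption about how the per-segment matching operates.

```python
import numpy as np

def match_histogram(source, template):
    """Remap source gray values so their histogram matches the template's.

    source, template: 2-D uint8 arrays (one color plane or segment each).
    Classical CDF-based histogram matching.
    """
    # unique values, their positions in the flattened source, and counts
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    # empirical cumulative distributions of both images
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    # for each source quantile, find the template value at the same quantile
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return mapped[s_idx].reshape(source.shape).astype(np.uint8)
```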

Most of the aforementioned color mapping techniques (1) are not focused on using colors that closely match the daytime colors, (2) do not achieve color constancy, and (3) are computationally expensive. Although Toet’s [15] method yields natural color rendering, it is computationally expensive and achieves no color constancy. A recently presented extension of this method does achieve color constancy, at the expense of an increased computational complexity [57].

Here we will introduce a new simple and fast method to consistently apply natural daytime colors to multi-band nightvision imagery. First, we will present some improvements on Toet’s method [15] that yield consistent object colors and enable real-time implementation. Next, we present a new lookup-table based method that gives multi-band night-time imagery a consistent natural daytime color appearance. The method (for which a patent application is currently pending [58]) is simple and fast, and can easily be deployed in real-time. After explaining how the color transformation can be derived from a given multi-band sensor image and a corresponding daytime reference image, we will show how this color transformation can be deployed at night and implemented in real-time.
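The lookup-table idea can be sketched as follows, under illustrative assumptions that are not taken from the paper: two normalized sensor bands quantized into a fixed 16×16 table, with each cell assigned the mean reference color of the training samples falling into it. Deriving the table is done once offline; applying it at night is a single indexing operation per pixel, which is what makes real-time deployment and object color constancy straightforward.

```python
import numpy as np

def build_color_lut(multiband, reference, bins=16):
    """Derive a color lookup table from corresponding image samples.

    multiband: (H, W, 2) floats in [0, 1], two registered nightvision bands.
    reference: (H, W, 3) floats, the corresponding daytime color image.
    Each 2-band value pair indexes a bins x bins cell; the cell stores the
    mean reference color of all samples that fall into it (an assumed rule).
    """
    idx = np.minimum((multiband * bins).astype(int), bins - 1)
    flat = (idx[..., 0] * bins + idx[..., 1]).ravel()
    counts = np.bincount(flat, minlength=bins * bins)
    lut = np.zeros((bins * bins, 3))
    for c in range(3):
        sums = np.bincount(flat, weights=reference[..., c].ravel(),
                           minlength=bins * bins)
        # average color per cell; empty cells stay black
        lut[:, c] = np.divide(sums, counts, out=np.zeros_like(sums),
                              where=counts > 0)
    return lut.reshape(bins, bins, 3)

def apply_color_lut(multiband, lut):
    """Colorize a multi-band image with a previously derived table."""
    bins = lut.shape[0]
    idx = np.minimum((multiband * bins).astype(int), bins - 1)
    return lut[idx[..., 0], idx[..., 1]]
```

Because the table maps sensor values (not scene context) to colors, the same object receives the same color in every frame, and the mapping can be reused on new sequences of similar scenes.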

Section snippets

Statistical similarity

Toet [15] presented a method for applying natural colors to a multi-band night-time image. In this method, certain statistical properties of a reference daytime image are transferred to the multi-band night-time image. First, two or three bands of the multi-band night-time image are mapped onto the RGB channels of a false-color image. The resulting false-color RGB nightvision image is then transformed into a perceptually de-correlated color space. In this color space the first-order statistics

Discussion and conclusion

We have presented a new method for applying natural daylight colors to multi-band night-time images (a patent application for this method is currently pending: [58]). The method derives an optimal color transformation from a set of corresponding samples taken from a daytime color reference image and a multi-band night-time image. The colors in the resulting colorized multi-band night-time image closely resemble the colors in the daytime color reference image. Moreover, when the same color

References (60)

  • Y. Zheng et al., A local-coloring method for night-vision colorization utilizing image analysis and fusion, Information Fusion (2008)
  • R. Breiter, W.A. Cabanski, K.-H. Mauk, W. Rode, J. Ziegler, H. Schneider, M. Walther, Multicolor and dual-band IR...
  • E. Cho et al., Development of a visible–NIR/LWIR QWIP sensor
  • M. Aguilar et al., Field evaluations of dual-band fusion for color night vision
  • M. Aguilar et al., Real-time fusion of low-light CCD and uncooled IR imagery for color night vision
  • L. Bai, W. Qian, Y. Zhang, B. Zhang, Theory analysis and experiment study on the amount of information in a color night...
  • S. Das, Y.-L. Zhang, W.K. Krebs, Color night vision for navigation and surveillance, in: J. Sutton, S.C. Kak (Eds.),...
  • E.A. Essock et al., Perceptual ability with real-world night-time scenes: image-intensified, infrared, and fused-color imagery, Human Factors (1999)
  • D.A. Fay, A.M. Waxman, M. Aguilar, D.B. Ireland, J.P. Racamato, W.D. Ross, W. Streilein, M.I. Braun, Fusion of...
  • M.T. Sampson, An assessment of the impact of fused monochrome and fused color night vision displays on reaction time...
  • J. Schuler et al., Multiband E/O color fusion with consideration of noise and registration
  • A.M. Waxman et al., Color night vision: fusion of intensified visible and thermal IR imagery
  • A.M. Waxman, M. Aguilar, R.A. Baxter, D.A. Fay, D.B. Ireland, J.P. Racamoto, W.D. Ross, Opponent-color fusion of...
  • A.M. Waxman et al., Progress on color night vision: visible/IR fusion, perception and search, and low-light CCD imaging
  • A.M. Waxman, Solid-state color night vision: fusion of low-light visible and thermal infrared imagery, MIT Lincoln Laboratory Journal (1999)
  • G. Huang et al., Visual and infrared dual-band false color image fusion method motivated by Land’s experiment, Optical Engineering (2007)
  • D. Scribner, P. Warren, J. Schuler, Extending color vision methods to bands beyond the visible, in: Proceedings of the...
  • J.G. Howard et al., Real-time color fusion of E/O sensors with PC-based COTS hardware
  • D. Scribner et al., Infrared color vision: an approach to sensor fusion, Optics and Photonics News (1998)
  • D. Scribner et al., Sensor and image fusion