Information Fusion, Volume 8, Issue 2, April 2007, Pages 168-176

Objectively adaptive image fusion

https://doi.org/10.1016/j.inffus.2005.10.002

Abstract

Signal-level image fusion has attracted considerable research attention in recent years, with a plethora of algorithms proposed using a host of image processing and information fusion techniques. Yet the optimal information fusion strategy, or the spectral decomposition that should precede it, cannot be defined a priori for arbitrary multi-sensor data. It could in principle be learned by evaluating fusion algorithms, either subjectively or through the small number of available objective metrics, on a large set of relevant sample data. This is impractical, however, and offers no guarantee of optimal performance should realistic input conditions differ from the sample data. This paper proposes and examines the viability of a powerful framework for objectively adaptive image fusion that explicitly optimises fusion performance for a broad range of input conditions. The idea is to employ the concepts used in objective image fusion evaluation to optimally adapt the fusion process to the input conditions. The specific focus is fusion for display, which has broad appeal in a wide range of fusion applications such as night vision, avionics and medical imaging. By integrating objective fusion metrics shown to be subjectively relevant into conventional fusion algorithms, the framework adapts fusion parameters to achieve an optimal fused display. The results show that the proposed framework achieves a considerable improvement in both the level and the robustness of fusion performance on a wide array of multi-sensor images and image sequences.

Introduction

Multiple sensor modalities are fast becoming ubiquitous in a wide range of imaging applications. The enhanced performance and increased robustness provided by multi-sensor arrays, however, come at the price of a considerable increase in the amount of data that needs to be processed. Fully automated systems require substantial computing power to deal with this “data overload”, while in cases where multi-sensor imagery is used for display, viewing multiple sensor modalities simultaneously places an unnecessary load on the observer, and integrating information across a group of observers becomes almost impossible [1].

Signal-level image fusion presents an effective solution to this data overload by combining multiple image signals into a single fused image that retains all the input information. This has made image fusion a focus of research, and a plethora of algorithms have been proposed based on a range of image processing techniques and information fusion strategies [1], [2], [3], [4], [5], [6], [7], [8], [9], [10]. The simplest way to obtain a fused image is to average the inputs; averaging, however, is prone to destructive superposition (loss of contrast). Multi-scale and multi-resolution approaches [1], [2], [3], [4], [5], [6], [7], [8] avoid this by decomposing the input images into representations made up of series of sub-signals containing features in narrow ranges of scale (and/or orientation), with (multi-scale) or without (multi-resolution) spatial redundancy. Such input image representations (also called image pyramids) are then merged into a new fused pyramid using a particular information fusion strategy, and the fused pyramid is reconstructed to produce the fused image. Early multi-resolution fusion methods used functions of the Gaussian pyramid such as the contrast [1], Laplacian [5] and gradient (with orientation sensitivity) [8] pyramids to fuse multi-sensor images. The discrete wavelet transform (DWT) was also considered as a platform for multi-resolution fusion [2], [3], [4], [5], along with a related differential approach in [6]. A multi-scale approach was used with the DWT in [4], [5].
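To make the pyramid-based scheme concrete, the following is a minimal sketch of Laplacian-pyramid fusion, assuming two registered, equally sized, single-channel inputs. The “larger-magnitude coefficient wins” selection rule used for the detail bands is one common fusion strategy, not necessarily the variant used in any of the cited methods.

```python
# A minimal sketch of multi-resolution (Laplacian pyramid) fusion.
# Assumes two registered, equally sized, single-channel float images;
# the max-magnitude detail rule is illustrative, not this paper's method.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into detail (band-pass) levels plus a low-pass residual."""
    pyr, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyr.append(current - up)   # features in a narrow range of scale
        current = down
    pyr.append(current)            # low-pass residual
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Fuse two images: keep the stronger detail coefficient, average the residuals."""
    pyr_a = laplacian_pyramid(img_a, levels)
    pyr_b = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))
    # Reconstruct by upsampling and adding back each detail level.
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return out
```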

The information fusion strategy, in the context of image pyramid fusion, was examined in [5], [7], which concluded that optimal fusion is achieved when information from higher levels of abstraction (e.g. meaningful segment boundaries) is considered. However, both advanced pre-processing and fusion algorithms require their parameters to be carefully selected and may not always provide satisfactory performance. Some authors have dealt with this by seeking an optimal fusion set-up offline, based on a sample of data [5], [7]. However, the optimal information fusion strategy, or the spectral decomposition that should precede it, cannot as yet be defined a priori for arbitrary multi-sensor data with a sufficient guarantee of satisfactory performance in a real application. This paper proposes and examines the viability of a powerful framework for objectively adaptive image fusion that explicitly optimises fusion performance for a broad range of input conditions. The idea is to employ the concepts used in objective image fusion evaluation to optimally adapt the fusion process to any input conditions, thereby avoiding the pitfall of offline tuning of fusion to a particular type of image content. The specific focus, meanwhile, is on fusion for display, which has broad appeal in a wide range of fusion applications such as medical imaging, night vision, avionics and remote sensing.

Fusion process adaptation is examined in two broad scenarios: the fusion of still images and the fusion of multi-sensor sequences. The goal is to develop methods of integrating objective fusion evaluation concepts into the fusion process itself, rather than applying the objective metrics externally in a brute-force optimisation of fusion parameters (sure to yield good fusion, but impractical). In particular, two distinct integration mechanisms are proposed: one that guides the information fusion process in multi-resolution and multi-scale fusion, and one that optimally adapts and controls an iterative multi-scale sequence fusion; a sketch of the latter idea follows below. It is shown that the proposed mechanisms provide considerable improvements in both still image and image sequence fusion performance. In Section 2, objective fusion evaluation is introduced briefly and its applicability to the proposed fusion process optimisation is examined. Section 3 proposes the integration of objective evaluation concepts into the fusion process, while Section 4 presents the results of the proposed fusion frameworks on a comprehensive set of multi-sensor data. The paper is concluded in Section 5.
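As a purely illustrative sketch of the closed-loop idea behind the second mechanism (the concrete mechanisms are developed in Section 3), the loop below adjusts a single hypothetical scalar fusion parameter each frame in whichever direction improves an objective fusion metric; `fuse` and `metric` are placeholders for any concrete fusion algorithm and subjectively relevant metric.

```python
# Illustrative closed-loop, objectively adaptive sequence fusion.
# fuse(a, b, theta) and metric(a, b, f) are hypothetical placeholders for a
# fusion algorithm with a scalar parameter and an objective fusion metric;
# the update rule is simple greedy hill-climbing, not this paper's mechanism.
def adaptive_sequence_fusion(frames_a, frames_b, fuse, metric,
                             theta=0.5, step=0.05):
    fused_frames = []
    for a, b in zip(frames_a, frames_b):
        # Probe the metric on either side of the current parameter value
        # and keep whichever setting scores best on this frame.
        candidates = (max(0.0, theta - step), theta, min(1.0, theta + step))
        _, theta = max((metric(a, b, fuse(a, b, t)), t) for t in candidates)
        fused_frames.append(fuse(a, b, theta))
    return fused_frames
```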

Section snippets

Objective fusion evaluation

The recent proliferation of systems employing image fusion algorithms has prompted the need for reliable ways of evaluating and comparing their performance for any given application or scenario. However, assessing image fusion performance, particularly when the intended use is to produce a visual display, has proved hard in practice. Evaluation is usually performed through robust yet impractical subjective trials [11], [12] that can take days or even weeks to complete. Objective fusion metrics …
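For illustration, one established family of objective fusion metrics scores the fused image by the mutual information it shares with each input; the sketch below assumes single-channel images and is not necessarily the metric adopted in this paper.

```python
# A sketch of a mutual-information-style objective fusion metric: the fused
# image is scored by how much information it carries about each input.
# Illustrative only; not necessarily the metric this paper employs.
import numpy as np

def mutual_information(x, y, bins=64):
    """Histogram-based mutual information (in bits) between two images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi_score(img_a, img_b, fused):
    """Higher scores mean the fused image preserves more input information."""
    return mutual_information(fused, img_a) + mutual_information(fused, img_b)
```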

Still image fusion

Multi-scale and multi-resolution approaches are the most commonly used to fuse a pair of multi-sensor images [1], [2], [3], [4], [5], [6], [7], [8]. This is due to their faithful representation of input features, separated according to scale and/or orientation, which provides robust fusion performance. They are therefore considered here, with the aim of providing objectively adaptive fusion by incorporating objective fusion evaluation concepts into such fusion systems.

The key to successful fusion is …

Results

The objectively adaptive fusion framework proposed in the previous section is evaluated on a large data set of multi-sensor imagery comprising over 165 still multi-sensor image pairs and six multi-sensor sequences [19]. The aim is to evaluate the proposed framework across a reasonably broad range of fusion-for-display applications, including remote sensing, medical imaging, night vision, surveillance and avionics.

Conclusions

This paper addresses the objective adaptation of image fusion, with the aim of achieving optimal fusion performance for any input conditions. This is done by employing concepts from objective fusion evaluation within the fusion process itself. Techniques for incorporating objective evaluation concepts into both still image and image sequence fusion are proposed. It was shown that including mutual input representation estimates in the information fusion strategy improves the …

