A new look at IHS-like image fusion methods
Introduction
With the development of new imaging sensors, a meaningful fusion method for all employed imaging sources becomes necessary. Image fusion is a means of combining the spectral information of a coarse-resolution image with the spatial detail of a finer-resolution image. The resulting merged image synergistically combines the best features of each of its components. The benefits of merged images have been demonstrated in many practical applications, especially vegetation, land-use, precision-farming, and urban studies. For local environmental applications, the high-resolution CARTERRA images from the IKONOS satellite can currently be acquired in two modes: the panchromatic (Pan) mode, with a high spatial resolution of 1 m, and the multispectral (MS) mode, with a ground resolution four times coarser. To take full advantage of CARTERRA images, it is important to determine an optimal merging approach for the task at hand.
Various methods for image fusion [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14] have been described earlier. In terms of efficiency and ease of implementation, the intensity-hue-saturation (IHS) method [1], [2], [3], principal component analysis (PCA) [2], [4], and the Brovey transform (BT) [2], [9] are the most commonly used algorithms in the remote sensing community. However, these fusion methods tend to introduce color distortion into the fused image. Compared to these methods, the wavelet transform (WT) with multiresolution decomposition [5], [6], [7], [10], [11], [12], [13] is a relatively new approach. The WT can characterize local variation at different scales owing to its changing resolutions in both the spatial and spectral domains.
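The multiresolution idea behind such WT-based schemes can be sketched in a few lines. The following 1-D, single-level Haar example is purely illustrative (it is not the specific wavelet scheme of any cited reference): the approximation coefficients of the MS band are kept, while the detail (high-frequency) coefficients are taken from the Pan signal.

```python
def haar_dwt(x):
    # single-level 1-D orthonormal Haar decomposition
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    # exact inverse of haar_dwt
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

def wavelet_substitution_fuse(ms_band, pan):
    # keep the coarse (approximation) content of the MS band,
    # inject the fine (detail) content of the Pan signal
    ms_approx, _ = haar_dwt(ms_band)
    _, pan_detail = haar_dwt(pan)
    return haar_idwt(ms_approx, pan_detail)
```

In the 2-D image case the same substitution is applied to the row/column subbands at each decomposition level, which is what gives the WT its explicit control over the spectral/spatial trade-off.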
Until now, these methods have mostly been evaluated independently with statistical metrics and have seldom been compared quantitatively with one another in terms of both spectral and spatial quality. For instance, the IHS, PCA, and BT methods all retain the spatial resolution of the Pan image but distort the spectral (color) characteristics to different degrees. In contrast, the WT offers the most direct control over the trade-off between spatial and spectral information; however, it preserves more spectral information at the cost of spatial detail, and this trade-off needs to be investigated. Against this background, the present work was initiated. A detailed study indicates that the color distortion problem arises from the change in saturation during the fusion process. Experimental results for the different fusion methods are also presented.
The RGB–IHS conversion model
Several different mathematical representations of the transformation can convert RGB tristimulus values into the parameters of human color perception and vice versa [14], [15]. Beyond computational speed, these algorithms differ mainly in the choice of coordinate systems (cylindrical or spherical coordinates), the primary color used as the hue reference point, and the method used to calculate the intensity component of the transformations.
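As a concrete illustration, one such representation (a cylindrical-coordinate variant built on a linear intensity axis; an illustrative choice, not necessarily the exact model used later in the paper) can be written as:

```python
import math

def rgb_to_ihs(r, g, b):
    """One cylindrical-coordinate IHS variant (illustrative).
    Inputs are floats in [0, 1]."""
    i = (r + g + b) / 3.0                    # intensity (linear axis)
    v1 = (2.0 * b - r - g) / math.sqrt(6.0)  # first chromatic coordinate
    v2 = (r - g) / math.sqrt(2.0)            # second chromatic coordinate
    h = math.atan2(v2, v1)                   # hue: angle in the chromatic plane
    s = math.hypot(v1, v2)                   # saturation: radius in that plane
    return i, h, s

def ihs_to_rgb(i, h, s):
    """Exact inverse of rgb_to_ihs."""
    v1 = s * math.cos(h)
    v2 = s * math.sin(h)
    r = i - v1 / math.sqrt(6.0) + v2 / math.sqrt(2.0)
    g = i - v1 / math.sqrt(6.0) - v2 / math.sqrt(2.0)
    b = i + 2.0 * v1 / math.sqrt(6.0)
    return r, g, b
```

Other variants differ in the intensity definition (e.g. weighted averages) and in the hue reference axis, but all share this forward/inverse structure.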
To understand the color distortion during the image
A generalized IHS image fusion (GIHS)
To compare various image fusion methods in RGB–IHS space, we first introduce a unifying image fusion method called generalized IHS (GIHS). In GIHS, the low-resolution intensity component (I0) in IHS space is replaced by a gray-level image of higher spatial resolution (Inew), and the result is transformed back into the original RGB space with the original H and S components, as in Eq. (2). That is,
Step 1: The original RGB image is transformed into IHS space, yielding I0, H, and S.
Step 2: I0 is replaced by Inew.
Step 3: The modified components (Inew, H, S) are transformed back into RGB space.
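A per-pixel sketch of the three steps, written with one illustrative linear IHS variant (the exact transform pair of Eq. (2) may differ), could look like:

```python
import math

def gihs_fuse(r, g, b, i_new):
    """Sketch of the three GIHS steps for one pixel (floats in [0, 1])."""
    # Step 1: RGB -> (I0, H, S); the chromatic coordinates v1, v2
    # carry the hue and saturation information
    i0 = (r + g + b) / 3.0
    v1 = (2.0 * b - r - g) / math.sqrt(6.0)
    v2 = (r - g) / math.sqrt(2.0)
    # Step 2: replace the low-resolution intensity i0 by i_new
    # (v1 and v2, i.e. H and S, are kept unchanged)
    # Step 3: inverse transform (i_new, H, S) -> RGB
    r_f = i_new - v1 / math.sqrt(6.0) + v2 / math.sqrt(2.0)
    g_f = i_new - v1 / math.sqrt(6.0) - v2 / math.sqrt(2.0)
    b_f = i_new + 2.0 * v1 / math.sqrt(6.0)
    return r_f, g_f, b_f
```

With this linear variant, substituting i_new = i0 reproduces the input exactly, and raising the intensity shifts all three bands by the same amount.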
IHS image fusion
The IHS method [1], [2], [3] is one of the most widespread image fusion methods in the remote sensing community and has been adopted as a standard procedure in many commercial packages. Within the fusion framework described in the previous section, IHS is the special case of GIHS in which Inew is the high-resolution Pan image. Following Eq. (5), a computationally efficient IHS method without the coordinate transformation can be given as
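Under a linear intensity model I = (R + G + B)/3, the whole forward-substitute-inverse chain collapses to a single additive correction per band. A minimal sketch, assuming co-registered and equally scaled inputs:

```python
def fast_ihs_fuse(r, g, b, pan):
    # additive form of IHS fusion: every band is shifted by the same
    # intensity difference delta = Pan - I, so band-to-band differences
    # (e.g. R - G) are preserved exactly
    delta = pan - (r + g + b) / 3.0
    return r + delta, g + delta, b + delta
```

Because the same delta is added to all three bands, the fused intensity equals the Pan value while the chromatic (band-difference) information is unchanged, which is what makes this form computationally efficient.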
Experimental results
The most widely applied fusion procedure is the merging of a panchromatic SPOT image with three-band SPOT imagery or multispectral LANDSAT TM imagery. The first experiment presents an example of a fused SPOT image to highlight the color distortion problem. The test images used herein include a 10-m-resolution panchromatic image and three 20-m color images of a rural area in Taichung, Taiwan, collected on March 4, 1994 by the SPOT satellite. The size of the test image is 512×512 pixels for the
Conclusions
Various image fusion techniques have been developed to merge data from different sensors. With the development of new imaging sensors, image fusion has become an important technique, capable of quickly merging massive volumes of data while preserving most of the information. Until now, these contemporary methods have only been independently evaluated by some statistical metrics, and have seldom been compared quantitatively with each other, in both spectral and spatial
Acknowledgements
The authors would like to thank the Space Imaging company for providing the IKONOS data. We also thank the National Science Council of the Republic of China for financial support under Contract No. NSC 89-2213-E-014-013. Finally, the authors would like to thank the anonymous reviewers for their comments, which helped to improve the quality and presentation of the paper.
References (18)
- et al., The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data, Photogramm. Eng. Remote Sensing (1990)
- et al., Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic, Photogramm. Eng. Remote Sensing (1991)
- et al., The use of intensity-hue-saturation transformation for producing color shaded relief images, Photogramm. Eng. Remote Sensing (1994)
- et al., Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis, Photogramm. Eng. Remote Sensing (1989)
- et al., Using iterated rational filter banks within the ARSIS concept for producing 10 m Landsat multispectral images, Int. J. Remote Sensing (1998)
- et al., Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation, Photogramm. Eng. Remote Sensing (2000)
- et al., Multiresolution-based image fusion with additive wavelet decomposition, IEEE Trans. Geosci. Remote Sensing (1999)
- et al., Multisensor image fusion in remote sensing: concepts, methods and applications, Int. J. Remote Sensing (1998)
- ER Mapper 5.0 Reference, Earth Resource Mapping Pty Ltd,...