Fusion of TerraSAR-X and Landsat ETM+ data for protected area mapping in Uganda

https://doi.org/10.1016/j.jag.2014.12.012

Highlights

  • We use a synergy of TerraSAR-X and Landsat ETM+ for land cover mapping.

  • High pass filtering and principal component analysis are used as fusion techniques.

  • 13 land cover classes are mapped, with a highest overall accuracy of 85.38%.

  • High potential exists for a synergy of TerraSAR-X and Landsat for land cover mapping.

Abstract

The TerraSAR-X satellite acquires very high spatial resolution data with potential for detailed land cover mapping. A known problem with synthetic aperture radar (SAR) data is the lack of spectral information. Fusion of SAR and multispectral data provides opportunities for better image interpretation and information extraction. The aim of this study was to investigate the fusion of TerraSAR-X and Landsat ETM+ for protected area mapping using high pass filtering (HPF), principal component analysis with band substitution (PCA) and principal component analysis with wavelet transform (WPCA). A total of thirteen land cover classes were identified for classification using a non-parametric C4.5 decision tree classifier. Overall classification accuracies of 74.99%, 83.12% and 85.38% and kappa indices of 0.7220, 0.8100 and 0.8369 were obtained for the HPF, PCA and WPCA fusion approaches, respectively. These results indicate a high potential for the combined use of TerraSAR-X and Landsat ETM+ data for protected area mapping in Uganda.

Introduction

Remote sensing can be classified as either optical or microwave, depending on the sensor and the portion of the electromagnetic spectrum used for data acquisition (Aplin, 2003, Mather, 2004). The proliferation of satellite sensors has resulted in enormous volumes of data available for scientific studies. Image fusion is an approach that combines two or more images using an algorithm to generate a new image which is more suitable for human visual interpretation (Li et al., 1995, Wald, 1999). It is regarded as an effective way of optimizing the use of the large volumes of data from multiple sensors (Dong et al., 2009).

Over the past decades, the fusion of multi-sensor data has received attention, particularly in the field of remote sensing (Ban, 2003, Mcnairn et al., 2009, Metternicht and Zinck, 1998), and has resulted in different terms being coined, such as merging, combination, synergy and integration, which are used interchangeably (Wang et al., 2005). Some image fusion approaches combine optical and microwave data, while others combine high and low spatial resolution optical data. This is because remote sensing products acquired by different sensors have different strengths and weaknesses. Optical remote sensing data, for example those acquired by Landsat TM/ETM+, have low cost and moderate spatial and spectral resolutions, which are essential for characterizing land cover at local and regional scales. In spite of these advantages, a well-known problem of optical remote sensing products, particularly in tropical areas, is frequent cloud cover (Asner, 2001). Consequently, obtaining cloud-free optical remote sensing images is difficult, which limits their use for applications that require regular data acquisition. Active microwave sensors, on the other hand, provide their own source of illumination at longer wavelengths that can penetrate fog, smoke, precipitation and clouds (Haack and Bechdol, 2000). Advances in microwave remote sensing therefore provide a viable option for acquiring data to supplement that obtained with optical instruments.

Several studies have examined the different image fusion methods (Kekre et al., 2013, Pohl and Van Genderen, 2008, Dong et al., 2009). These methods can be broadly characterised as colour related, statistical or numerical. Colour-related approaches deal with the composition of three image bands in the RGB colour space, and include more sophisticated approaches utilizing hue, saturation and value (Tu et al., 2001, Al-wasai et al., 2011). A major limitation of colour-related approaches is that only three bands can be used for the RGB colour composition. Statistical image fusion techniques rely on band statistics; notable among these is the principal component analysis (PCA) fusion approach (Shah and Younan, 2008). Numerical approaches to image fusion are based on wavelet theory for multi-resolution analysis (Amolins et al., 2007).
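The PCA band-substitution idea described above can be sketched in a few lines: project the multispectral bands onto their principal components, replace the first component (which carries most of the spatial variance) with the high-resolution image, and invert the transform. The function below is a minimal illustration and not the authors' implementation; the histogram-matching step and function name are our own assumptions.

```python
import numpy as np

def pca_band_substitution(ms, pan):
    """PCA fusion sketch: substitute the first principal component of a
    multispectral stack (ms: bands x H x W) with a high-resolution
    image (pan: H x W), then apply the inverse PCA transform."""
    bands, h, w = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # eigen-decomposition of the band-to-band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(Xc))
    order = np.argsort(vals)[::-1]            # sort by descending variance
    vecs = vecs[:, order]
    pcs = vecs.T @ Xc                         # principal component scores
    # match the pan image to PC1's mean/std before substituting it
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    fused = vecs @ pcs + mean                 # inverse PCA
    return fused.reshape(bands, h, w)
```

Because only the first component is replaced and the pan image is matched to its statistics, the fused bands keep the per-band means of the original multispectral data while inheriting the fine spatial detail of the substituted component.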

Pohl and Van Genderen (2008) provide several reasons for image fusion including: (1) image sharpening, (2) improvement of geometric correction, (3) providing stereo-viewing capability for stereo-photogrammetry, (4) enhancing features not visible in either of the images, (5) complementing datasets for improved classification, (6) detecting changes from multi-temporal imagery, (7) substituting missing information and (8) replacing defective data. Accordingly, image fusion aims at combining the different, but complementary, information apparent in both images as well as improving the reliability of image interpretation. Furthermore, combining multi-sensor data enables the extraction of more information than any single sensor used alone (Chen et al., 2003).

Regardless of the method selected for analysis, image fusion can be performed at any of three levels: pixel, feature and decision (Pohl and Van Genderen, 1998). Image fusion at the pixel level entails merging at the lowest processing level (Kuplich et al., 2000). The success of this operation requires proper co-registration of the images as well as an appropriate re-sampling and interpolation method. Resampling computes the new pixel values from the original distorted image using one of three common approaches: nearest neighbour, bilinear interpolation and cubic convolution (Gupta, 1991). At the feature level, fusion is performed on objects derived using segmentation algorithms; features from multiple data sources with similar characteristics, such as shape, extent and neighbourhood, are matched and fused using statistical approaches such as artificial neural networks (ANNs) (Pohl and Van Genderen, 1998). Decision-level image fusion involves processing the separate images to derive information prior to fusion; the derived value-added products are then fused using decision rules to facilitate improved interpretation and understanding of the observed classified objects.
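Two of the three resampling approaches named above (nearest neighbour and bilinear interpolation) can be illustrated with a short sketch; cubic convolution is omitted because it needs a 4 x 4 pixel neighbourhood. The helper below is hypothetical, written only to show how the two methods compute new pixel values when a coarse grid is upsampled to a finer one.

```python
import numpy as np

def resample(img, factor, method="nearest"):
    """Upsample a 2-D array by `factor` using nearest-neighbour or
    bilinear interpolation (illustrative sketch, not production code)."""
    h, w = img.shape
    H, W = int(h * factor), int(w * factor)
    # centres of the new pixels expressed in old-grid coordinates
    ys = (np.arange(H) + 0.5) / factor - 0.5
    xs = (np.arange(W) + 0.5) / factor - 0.5
    if method == "nearest":
        yi = np.clip(np.rint(ys).astype(int), 0, h - 1)
        xi = np.clip(np.rint(xs).astype(int), 0, w - 1)
        return img[np.ix_(yi, xi)]            # copy the closest old pixel
    # bilinear: weighted average of the four surrounding old pixels
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    a = img[np.ix_(y0, x0)]
    b = img[np.ix_(y0, x0 + 1)]
    c = img[np.ix_(y0 + 1, x0)]
    d = img[np.ix_(y0 + 1, x0 + 1)]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)
```

Nearest neighbour preserves the original pixel values exactly (useful for categorical data), whereas bilinear interpolation produces smoother output whose values remain within the range of the input.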

Advances in the field of microwave remote sensing have enabled the acquisition of new products by sensors such as those on-board the TerraSAR-X (TSX) satellite. This has provided more opportunities for environmental analysis and understanding. While literature exists on image fusion approaches involving the use of optical and SAR data for information extraction, few available studies involve a synergy of TSX and Landsat ETM+. Data obtained from the TSX satellite have a very high spatial resolution of up to 1 m and are not affected by cloud cover except during heavy rainfall. This makes them attractive for land cover mapping and monitoring, particularly in the tropical regions of Africa. The purpose of this study was to assess the potential of image fusion between TSX and Landsat ETM+ for enhancing land cover information extraction.

Section snippets

Study area

The study area is the Bwindi Impenetrable National Park (BINP), located in the south-western part of Uganda and cutting across the districts of Kabale, Rukungiri, Kisoro and Kanungu (Fig. 1). It is bounded by latitudes 0°53′–1°08′ S and longitudes 29°35′–29°50′ E, with an estimated area of 331 square kilometres. The BINP forms part of the highest blocks of the Kigezi and Rukiga highlands and lies at the edge of the Great Western Rift Valley. A small area of the park stretches to the East of

Data acquisition

Two overlapping Enhanced Ellipsoidal Corrected (EEC) TerraSAR-X (TSX) and Landsat ETM+ images, with spatial resolutions of 2.75 m × 2.75 m and 30 m × 30 m respectively, were selected for analysis. The TSX images were acquired on the 4th and 15th of December, 2009. These images were obtained courtesy of the UNESCO-ESA open initiative to use space technologies to support the World Heritage Convention, under project number LAN0559. In contrast, the Landsat ETM+ images, acquired on February 21, 2005, were

Results

The classification results of the fused TSX and Landsat ETM+ images are summarised in Table 1 and Fig. 3, Fig. 4. Table 1 compares the classification accuracies obtained from the selected image fusion techniques while Fig. 3 shows classification accuracy of the separate classes. Land cover classification maps are shown in Fig. 4. All the thirteen land cover classes including the no data class were identified and classified. Image fusion based on high pass filtering resulted in low overall

Discussion

Image fusion based on HPF, PCA and WPCA resulted in overall accuracies of 74.99%, 83.12% and 85.38% and corresponding kappa indices of 0.7220, 0.8100 and 0.8369. Among the three selected image fusion methods, WPCA provided the highest overall classification accuracy. The HPF image fusion classified some classes with low accuracies, for example MF (48.6%), DEGF (61.9%), TP (65.5%), DW (66.4%) and VW (54.3%). This resulted in low overall classification accuracy and Kappa
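The overall accuracies and kappa indices quoted above follow the standard confusion-matrix formulas: overall accuracy is the fraction of correctly classified samples, and kappa discounts the agreement expected by chance. A minimal sketch (the 2 x 2 class counts in the usage example are illustrative, not taken from the study):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Compute overall accuracy and Cohen's kappa from a confusion
    matrix (rows: reference classes, columns: classified classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)
```

For a hypothetical matrix `[[50, 10], [5, 35]]` this gives an overall accuracy of 0.85 and a kappa of about 0.69, illustrating how kappa is systematically lower than overall accuracy once chance agreement is removed.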

Conclusion and recommendations

This study examined the synergy of TerraSAR-X and Landsat ETM+ for land cover mapping using a decision tree classifier. Three image fusion techniques were explored: high pass filtering, principal component analysis with band substitution and principal component analysis with wavelet transform. The most appropriate image fusion method was that involving the wavelet transform, since it provided the highest classification accuracy. Overall, there is a high potential for the combined use of Landsat

Acknowledgements

The authors would like to thank the German Space Agency (DLR), the United Nations Educational, Scientific and Cultural Organisation (UNESCO), and the University of Maryland for providing the TSX and Landsat ETM+ data used in the study.

References (27)

  • P. Aplin

    Remote sensing: base mapping

    Prog. Phys. Geog.

    (2003)
  • G.P. Asner

    Cloud cover in Landsat observations of the Brazilian Amazon

    Int. J. Remote Sens.

    (2001)
  • Y.F. Ban

    Synergy of multitemporal ERS-1 SAR and Landsat TM for classification of agricultural crops

    Can. J. Remote Sens.

    (2003)