Integration of optical and synthetic aperture radar (SAR) images to differentiate grassland and alfalfa in Prairie area

https://doi.org/10.1016/j.jag.2013.10.003

Highlights

  • Proposed low-cost earth observation sensor imagery (MODIS and ScanSAR) to identify the spatial distribution of alfalfa and estimate its biofuel potential at a regional level.

  • Investigated an innovative wavelet and IHS integration image fusion technique for combining MODIS and ScanSAR.

  • Demonstrated that the image fusion technique significantly improves grassland and alfalfa differentiation compared with commonly used multi-source data combination methods.

Abstract

Alfalfa presents a huge potential biofuel source in the Prairie Provinces of Canada. However, it remains a challenge to find an ideal single satellite sensor to monitor the regional spatial distribution of alfalfa on an annual basis. The primary interest of this study is to identify the spatial distribution of alfalfa by effectively differentiating it from grassland, given their spectral similarity and shared growth calendar. MODIS and RADARSAT-2 ScanSAR narrow mode were selected for regional-level grassland and alfalfa differentiation in the Prairie Provinces, due to the high revisit frequency of MODIS, the weather independence of ScanSAR, the large area coverage of both, and the complementary characteristics of SAR and optical images. Combining MODIS and ScanSAR to differentiate alfalfa and grassland is very challenging because of the large spatial resolution difference between MODIS (250 m) and ScanSAR narrow (50 m). This study investigated an innovative image fusion technique for combining MODIS and ScanSAR to obtain a synthetic image that carries the high spatial detail derived from ScanSAR and the colour information from MODIS. A field trip was arranged to collect ground truth for labelling and validating the classification results. The fused classification result shows a significant accuracy improvement compared with either ScanSAR or MODIS alone, or with other commonly used data combination methods such as multi-file composites. This study has shown that the image fusion technique used here can combine the structural information from high-resolution ScanSAR with the colour information from MODIS to significantly improve classification accuracy between alfalfa and grassland.

Introduction

Biofuels, advocated as a renewable, cost-effective alternative to petroleum-based liquid fuels, require the development of a cellulosic-based biofuels industry (Campbell, 2012). Perennials rich in fibre are generally a suitable feedstock for bioenergy, while those abundant in foliage are efficient as feed for livestock. Alfalfa (Medicago sativa L.) has been proposed as a biofuel feedstock, since its stems could be processed to produce energy or fuel and the leaves used as livestock feed (McCaslin and Miller, 2007). In order to understand and develop the biofuel potential of alfalfa, its spatial distribution needs to be determined more accurately than currently available spatial information extraction methods allow.

SAR is often used in vegetation mapping, due to its independence from solar illumination and its imaging principles, which differ from those of optical sensors (Buckley, 2004, Smith and Buckley, 2011). The brightness of a SAR image depends on the roughness, geometry, and material content of the target surface and on the SAR wavelength. The grey-level information in an optical image represents the reflectance of solar energy from the target area (Jensen, 2005). Combining microwave and optical sensors can help discriminate different classes, since the two are complementary (Pohl and Van Genderen, 1998). Many studies have combined optical and microwave images to improve mapping accuracy in agricultural scenarios (Brisco et al., 1989, Schistad-Solberg et al., 1994, Brisco and Brown, 1995, Le Hegarat-Mascle et al., 2000, Ban, 2003, Blaes et al., 2005, Michael et al., 2005, McNairn et al., 2009). SAR and optical imagery can be integrated in different ways to improve the data and information content during image processing for information extraction. Image fusion is a technique that combines the optical and SAR sensor data prior to information extraction. The purpose of radar and optical image fusion is mainly feature enhancement and confusion reduction (van der Sanden and Thomas, 2004, Schistad-Solberg et al., 1994). The roles of image fusion can be summarized in three ways:

  • (1)

    Take maximum advantage of the merits of each single sensor. Since every sensor has strengths and weaknesses, integrating data from different sensors offers a potential synergy: the strengths of each sensor can be exploited without significantly distorting its desirable characteristics (Lewis et al., 1998, Amarsaikhan and Douglas, 2004).

  • (2)

    Reduce information redundancy caused by multisource data (Schistad-Solberg et al., 1994). Images acquired over the same geographic area by different sensors exhibit both partial redundancy, since they cover the same geographic area, and partial complementarity, since the sensors cover different spectral ranges; the latter is very apparent when comparing optical and microwave images. The aim of image fusion is not only to use their complementarity to reduce confusion by obtaining a more complete description of land cover features, but also to use multisource redundancy to reduce imprecision and classification errors (Le Hegarat-Mascle et al., 2000), thus improving classification results (Tso and Mather, 2001).

  • (3)

    Un-mix mixed pixels. Generally, the spatial resolution of the selected SAR image is higher than that of the optical multispectral image. The fused image can, to a certain level, un-mix the mixed pixels of the lower spatial resolution multispectral image (Van der Meer, 1997, Robinson et al., 2000, Bachmann and Habermeyer, 2003, Hong et al., 2011).

This study collected early-season remote sensing images to differentiate perennials from annuals. Early-season MODIS and ScanSAR narrow mode images were selected for regional-level grassland and alfalfa differentiation in the Prairie area, since grassland and alfalfa grow at the same time. MODIS is well suited to operational mapping: it includes seven spectral channels designed primarily for land mapping applications, and offers large coverage, high revisit frequency, small data volume, and free data since February 2000. These characteristics are good for regional-level mapping applications. However, its spatial resolution is relatively coarse (250 m for the first two channels and 500 m for the other five), which causes mixed-pixel problems and lowers classification accuracy. ScanSAR data from RADARSAT-2 are a good source of high-resolution spatial information (50 m) for regional mapping at a frequent repeat rate, due to the sensor's all-weather, day-and-night collection capability, its low cost for Canadian Government-related projects, and its large geographic coverage (300 km × 300 km). However, a single SAR image produces low separability between different land use activities, including grassland versus alfalfa. The objective of this study is to investigate an image fusion technique to improve grassland and alfalfa differentiation by combining MODIS and ScanSAR imagery. Specifically, we aimed to answer the following questions:

  • (1)

    Does the incorporation of radar information in the classification process between alfalfa and grassland improve accuracy?

  • (2)

    What kind of radar/optical data combination(s) is/are more suitable to provide this information?

Section snippets

Study area

A pilot study area was selected in Southern Saskatchewan (Fig. 1). The geographic coverage of this area is about 211 km × 236 km. The study area is primarily semiarid, and land use is dominated by cereal production, with pasture, forage, oilseed and pulse production and some conservation parks.

Data sets

Two early-season data sets were acquired: the MODIS image on June 2, 2009 and the ScanSAR narrow mode image on June 20, 2009. The early-season data selection was mainly to avoid other spectral confusion

ScanSAR ortho-rectification process

ScanSAR images were ortho-rectified with DEM data (1:250,000 scale) downloaded from GeoBase (http://www.geobase.ca/). The radar-specific model in PCI OrthoEngine was used in the ortho-rectification process. As the terrain relief in the study area is modest, the final image residual errors in the X and Y directions are both less than 1 pixel. The ortho-rectified image was resampled to 50 m using the nearest-neighbour resampling method.
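Nearest-neighbour resampling is preferred for SAR imagery because it preserves original backscatter values rather than interpolating new ones. The resampling step can be sketched with a minimal numpy illustration (this is not the PCI OrthoEngine implementation used in the study; the array sizes and scale factor here are hypothetical):

```python
import numpy as np

def nearest_neighbour_resample(image, out_shape):
    """Resample a 2-D array to out_shape by nearest-neighbour lookup.

    Each output pixel takes the value of the closest input pixel, so
    no new (interpolated) values are introduced into the image.
    """
    in_rows, in_cols = image.shape
    out_rows, out_cols = out_shape
    # Map each output index to the nearest input index.
    row_idx = (np.arange(out_rows) * in_rows / out_rows).astype(int)
    col_idx = (np.arange(out_cols) * in_cols / out_cols).astype(int)
    return image[np.ix_(row_idx, col_idx)]

# Hypothetical 4x4 image resampled to an 8x8 grid.
img = np.arange(16, dtype=float).reshape(4, 4)
resampled = nearest_neighbour_resample(img, (8, 8))
```

Because every output value is copied from an input pixel, the set of grey values in the resampled image is a subset of the original, which is the property that makes this method suitable for radiometric data such as SAR backscatter.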

Speckle removal

The original ScanSAR image is contaminated with speckle
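The snippet above does not show which speckle filter was applied in the study. As an illustration only, a basic Lee filter, one of the most commonly used adaptive speckle filters, can be sketched as follows (the window size and the noise-variance estimate are assumptions made for this sketch):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, window=5):
    """Basic Lee adaptive speckle filter.

    Each output pixel blends the local mean with the original value,
    weighted by how much local variance exceeds the estimated noise
    variance: homogeneous areas are smoothed, edges are preserved.
    """
    mean = uniform_filter(image, size=window)
    sq_mean = uniform_filter(image ** 2, size=window)
    var = sq_mean - mean ** 2
    # Crude noise-variance estimate: mean of the local variances
    # (an assumption for this sketch, not the paper's method).
    noise_var = var.mean()
    weight = var / (var + noise_var)
    return mean + weight * (image - mean)
```

On a homogeneous noisy scene the filter reduces the overall variance while leaving the image dimensions and mean brightness essentially unchanged.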

Fusion

The original HV band (Fig. 3(a)) was selected for fusion with MODIS (bands 6, 2, and 1; Fig. 3(b)), since HV is more sensitive to vertical structures than HH, with the final fusion result shown in Fig. 3(c). Fig. 3(c) appears similar to Fig. 3(b) in terms of colour, as no serious colour distortion was identified in the fusion result. To clarify the details of these images, close views of the subset area highlighted in Fig. 3(a) are shown in Fig. 4(a)–(c). Fig. 4(c) shows that the
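The study's method integrates wavelet decomposition with the IHS transform (Hong et al., 2009). A simplified IHS-substitution sketch, omitting the wavelet step and using hypothetical arrays, illustrates the core idea: derive the intensity of the colour composite, replace it with a statistically matched high-resolution band, and keep the colour (hue/saturation) information unchanged:

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Simplified IHS-style fusion by intensity substitution.

    rgb: (rows, cols, 3) float array (e.g. upsampled MODIS bands 6, 2, 1)
    pan: (rows, cols) float array (e.g. despeckled high-resolution SAR band)
    """
    intensity = rgb.mean(axis=2)
    # Match the high-resolution band to the intensity's mean and
    # standard deviation to limit colour distortion in the result.
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_matched = pan_matched * intensity.std() + intensity.mean()
    # Adding the intensity difference equally to all three bands is
    # equivalent to substituting the I channel in the IHS transform:
    # band-to-band differences (the colour) are left untouched.
    delta = pan_matched - intensity
    return rgb + delta[..., None]
```

In the fused result the spatial detail comes from the high-resolution band while the colour balance of the multispectral composite is preserved, which is the behaviour described for Fig. 3(c) above; the full wavelet and IHS integration additionally controls which spatial frequencies are injected.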

Conclusion

This study proposed an earth-observation-based method to identify the spatial distribution of alfalfa, given its huge biofuel potential in the Prairie Provinces. The challenge has two aspects: alfalfa has a growing season and spectral signature similar to those of other crops, and cloud-free remote sensing data are not easy to acquire during the crop growing season. Early-season remote sensing imagery was acquired to avoid spectral confusion with other annual crops. This study proposed to combine

Acknowledgements

The authors would like to thank the Canadian Space Agency for providing the Radarsat-2 data through the Climate Change Geoscience Program of the Earth Sciences Sector, Natural Resources Canada. The critical comments of two anonymous reviewers, which significantly improved this manuscript, are greatly appreciated. Financial support from the Canadian Space Agency, through the Government Related Initiatives Program, and from the York University contract faculty research grants fund (CUPE 3903) is acknowledged.

References (32)

  • J. Campbell, Mapping grasslands for biofuel potential, USGS Newsroom (2012)
  • R.G. Congalton et al., Assessing the Accuracy of Remotely Sensed Data: Principles and Practices (2008)
  • G.M. Foody, Classification accuracy assessment, IEEE Geoscience and Remote Sensing Society Newsletter (2011)
  • G. Hong et al., Crop type identification potential of Radarsat-2 and MODIS images in prairie area, Canadian Journal of Remote Sensing (2011)
  • G. Hong et al., Fusion of MODIS and Radarsat data for crop type classification – an initial study
  • G. Hong et al., A wavelet and IHS integration method to fuse high resolution SAR with moderate resolution multispectral images, Photogrammetric Engineering and Remote Sensing (2009)