International Journal of Applied Earth Observation and Geoinformation
Integration of optical and synthetic aperture radar (SAR) images to differentiate grassland and alfalfa in Prairie area
Introduction
Biofuels, advocated as a renewable, cost-effective alternative to petroleum-based liquid fuels, require the development of a cellulosic-based biofuels industry (Campbell, 2012). Perennials rich in fibre are generally a suitable feedstock for bioenergy, while those abundant in foliage are efficient as livestock feed. Alfalfa (Medicago sativa L.) has been proposed as a biofuel feedstock, since its stems can be processed to produce energy or fuel and its leaves used as livestock feed (McCaslin and Miller, 2007). To understand and develop alfalfa's biofuel potential, its spatial distribution needs to be determined more accurately than currently available spatial information extraction methods allow.
SAR is often used in vegetation mapping because it is independent of solar illumination and its imaging principles differ from those of optical sensors (Buckley, 2004, Smith and Buckley, 2011). The brightness of a SAR image depends on the roughness, geometry, and material content of the target surface and on the SAR wavelength. The grey-level information in an optical image represents the reflectance of solar energy from the target area (Jensen, 2005). Combining microwave and optical sensors can help discriminate different classes, since the two data sources are complementary (Pohl and Van Genderen, 1998). Many studies have combined optical and microwave imagery to improve mapping accuracy in agricultural scenarios (Brisco et al., 1989, Schistad-Solberg et al., 1994, Brisco and Brown, 1995, Le Hegarat-Mascle et al., 2000, Ban, 2003, Blaes et al., 2005, Michael et al., 2005, McNairn et al., 2009). SAR and optical imagery can be integrated in different ways to improve the data and information content during image processing for information extraction. Image fusion is a technique that combines optical and SAR sensor data prior to information extraction. The purpose of radar and optical image fusion is mainly feature enhancement and confusion reduction (van der Sanden and Thomas, 2004, Schistad-Solberg et al., 1994). Image fusion plays three roles:
- (1)
Take maximum advantage of the merits of each sensor. Since every sensor has strengths and weaknesses, there is potential synergy in integrating data from different sensors to exploit their strengths without significantly distorting the desirable characteristics of any single sensor (Lewis et al., 1998, Amarsaikhan and Douglas, 2004).
- (2)
Reduce information redundancy caused by multisource data (Schistad-Solberg et al., 1994). Images acquired over the same geographic area by different sensors are partially redundant, since they cover the same area, and partially complementary, since the sensors cover different spectral ranges; the latter is particularly apparent when comparing optical and microwave images. The aim of image fusion is not only to use this complementarity to reduce confusion by obtaining a more complete description of land cover features, but also to use the redundancy of multisource data to reduce imprecision and classification errors (Le Hegarat-Mascle et al., 2000), thus improving classification results (Tso and Mather, 2001).
- (3)
Un-mix mixed pixels. Generally, the spatial resolution of the selected SAR image is higher than the optical multispectral image. The fused image can un-mix the mixed pixels in the lower spatial resolution multispectral image to a certain level (Van der Meer, 1997, Robinson et al., 2000, Bachmann and Habermeyer, 2003, Hong et al., 2011).
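The detail-injection idea behind these roles can be sketched with a simple ratio-based (Brovey-style) fusion. This is a generic illustration, not the method used in this study, and it assumes the coarse multispectral bands have already been resampled onto the fine-resolution grid:

```python
import numpy as np

def brovey_fuse(ms, pan):
    """Brovey-style fusion: scale each coarse multispectral band (already
    resampled to the fine grid) by the ratio of the fine-resolution band
    to the per-pixel multispectral intensity, injecting spatial detail."""
    intensity = ms.mean(axis=0)           # (H, W) per-pixel intensity
    ratio = pan / (intensity + 1e-6)      # spatial-detail ratio
    return ms * ratio                     # broadcast over bands

# toy example: a 3-band composite and a sharper single band on the same grid
rng = np.random.default_rng(0)
ms = rng.uniform(0.1, 0.9, size=(3, 8, 8))
pan = rng.uniform(0.1, 0.9, size=(8, 8))
fused = brovey_fuse(ms, pan)
# the fused intensity now tracks the high-resolution band
print(np.allclose(fused.mean(axis=0), pan, atol=1e-3))
```

The band ratios (relative spectral shape) of each pixel are preserved, while the high-frequency spatial detail of the fine band is transferred to all three output bands.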
This study collected early-season remote sensing images to differentiate perennials from annuals. An early-season MODIS image and a ScanSAR narrow mode image were selected for regional-level grassland and alfalfa differentiation in the Prairie area, since grassland and alfalfa grow at the same time. MODIS is suitable for operational mapping: it includes seven spectral channels designed primarily for land mapping applications, and offers large coverage, a high revisit frequency, small data volumes, and free availability since February 2000. These characteristics suit regional-level mapping applications. However, its spatial resolution is relatively coarse (250 m for the first two channels and 500 m for the other five), which causes mixed-pixel problems and lowers classification accuracy. ScanSAR data from Radarsat-2 are a good source of high-resolution spatial information (50 m) for regional mapping at a frequent repeat rate, owing to the sensor's all-weather, day-and-night collection capability, its low cost for Canadian Government-related projects, and its large geographic coverage (300 km × 300 km). However, a single SAR image provides low separability between different land use activities, including grassland versus alfalfa. The objective of this study is to investigate an image fusion technique to improve grassland and alfalfa differentiation by combining MODIS and ScanSAR imagery. Specifically, we aimed to answer the following questions:
- (1)
Does the incorporation of radar information in the classification process between alfalfa and grassland improve accuracy?
- (2)
What kind of radar/optical data combination(s) is/are more suitable to provide this information?
Study area
A pilot study area was selected in Southern Saskatchewan (Fig. 1). The geographic coverage of this area is about 211 km × 236 km. The study area is primarily semiarid; land use is dominated by cereal production, with pasture, forage, oilseeds, pulse production and some conservation parks.
Data sets
Two early-season MODIS and ScanSAR data sets were acquired; the MODIS on June 2, 2009 and the ScanSAR narrow mode on June 20, 2009. The early season data selection was mainly to avoid other spectral confusion
ScanSAR ortho-rectification process
ScanSAR images were ortho-rectified with DEM data (1:250,000 scale) downloaded from GeoBase (http://www.geobase.ca/). The radar-specific model in PCI OrthoEngine was used in the ortho-rectification process. As the terrain relief in the study area is relatively low, the final image residual errors in the X and Y directions are both less than 1 pixel. The ortho-rectified image was resampled to 50 m using the nearest-neighbour resampling method.
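Nearest-neighbour resampling is commonly preferred for SAR data because it preserves the original backscatter values rather than interpolating between them. A minimal sketch of the idea, illustrative only and not the PCI OrthoEngine implementation:

```python
import numpy as np

def resample_nearest(img, src_res, dst_res):
    """Resample a 2-D image from src_res to dst_res (metres per pixel)
    by nearest-neighbour lookup, preserving original pixel values."""
    h, w = img.shape
    out_h = int(round(h * src_res / dst_res))
    out_w = int(round(w * src_res / dst_res))
    # centre-of-pixel coordinates of the output grid, mapped to source indices
    rows = np.minimum(((np.arange(out_h) + 0.5) * dst_res / src_res).astype(int), h - 1)
    cols = np.minimum(((np.arange(out_w) + 0.5) * dst_res / src_res).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

# e.g. a 100 m grid resampled to 50 m doubles each dimension
img = np.arange(16, dtype=float).reshape(4, 4)
out = resample_nearest(img, src_res=100.0, dst_res=50.0)
print(out.shape)                              # (8, 8)
print(set(out.ravel()) <= set(img.ravel()))   # True: only original values appear
```

Because every output pixel is a copy of some input pixel, the radiometry of the speckled SAR image is untouched, which matters for any subsequent statistical filtering.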
Speckle removal
The original ScanSAR image is contaminated with
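The snippet is truncated at this point; speckle in SAR imagery is typically suppressed with an adaptive local-statistics filter. As an illustration only (the excerpt does not name the filter actually applied in the study), a simplified Lee filter over a sliding window:

```python
import numpy as np

def lee_filter(img, size=5, noise_var=0.25):
    """Simplified Lee speckle filter: blend each pixel with its local mean,
    weighting toward the mean in homogeneous areas (low local variance)
    and toward the original value near edges (high local variance).
    noise_var is an assumed speckle variance, not an estimated one."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    mean = win.mean(axis=(-2, -1))
    var = win.var(axis=(-2, -1))
    weight = var / (var + noise_var)      # ~0 in flat areas, -> 1 at edges
    return mean + weight * (img - mean)

# toy scene: constant reflectivity with multiplicative gamma speckle
rng = np.random.default_rng(1)
speckled = rng.gamma(shape=4.0, scale=0.25, size=(64, 64))  # mean 1, var 0.25
filtered = lee_filter(speckled)
print(filtered.std() < speckled.std())   # True: speckle variance reduced
```

The adaptive weight is what distinguishes this from a plain moving average: strong local variance (likely a real edge or field boundary) keeps the original value, so structure useful for classification is not blurred away.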
Fusion
The original HV polarization (Fig. 3(a)) was selected to fuse with MODIS (bands 6, 2, and 1, Fig. 3(b)), since HV is more sensitive to vertical structures than HH; the final fusion result is shown in Fig. 3(c). Fig. 3(c) appears similar to Fig. 3(b) in terms of colour, as no serious colour distortion was identified in the fusion result. To clarify the details of these images, close views of the subset area highlighted in Fig. 3(a) are shown in Fig. 4(a)–(c). Fig. 4(c) shows that the
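The reference list cites a wavelet-and-IHS integration method for this kind of SAR/optical fusion. As an illustration of the plain IHS-substitution idea only (not the exact method of this study), using Python's HLS transform as a stand-in for IHS and assuming the SAR band is co-registered and scaled to [0, 1]:

```python
import colorsys
import numpy as np

def ihs_substitution_fuse(rgb, sar):
    """Substitute the intensity component of an optical composite with a
    co-registered SAR band: convert each pixel to hue/lightness/saturation,
    replace lightness with the SAR value, and convert back. Hue and
    saturation (the optical colour) are preserved."""
    out = np.empty_like(rgb)
    h, w, _ = rgb.shape
    for i in range(h):
        for j in range(w):
            hue, _, sat = colorsys.rgb_to_hls(*rgb[i, j])
            out[i, j] = colorsys.hls_to_rgb(hue, sar[i, j], sat)
    return out

# toy example on a 4x4 composite
rng = np.random.default_rng(2)
rgb = rng.uniform(0.1, 0.9, size=(4, 4, 3))   # e.g. a three-band composite
sar = rng.uniform(0.1, 0.9, size=(4, 4))      # e.g. a matched backscatter band
fused = ihs_substitution_fuse(rgb, sar)
# the fused brightness now comes from the SAR band
print(abs(colorsys.rgb_to_hls(*fused[1, 2])[1] - sar[1, 2]) < 1e-6)
```

Because only the intensity component is replaced, the hue and saturation of the optical composite survive, which is consistent with the paper's observation that no serious colour distortion appears in the fusion result.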
Conclusion
This study proposed an earth-observation-based method to identify the spatial distribution of alfalfa, given its large biofuel potential in the Prairie Provinces. The challenge has two aspects: alfalfa has a growing season and spectral signature similar to those of other crops, and cloud-free remote sensing data are not easy to acquire during the crop growing season. Early-season remote sensing imagery was acquired to avoid spectral confusion with other annual crops. This study proposed to combine
Acknowledgements
The authors would like to thank the Canadian Space Agency for providing the Radarsat-2 data through the Climate Change Geoscience Program of the Earth Sciences Sector, Natural Resources Canada. The critical comments of two anonymous reviewers, which significantly improved this manuscript, are greatly appreciated. Financial support from the Canadian Space Agency, through the Government Related Initiatives Program, and from the York University contract faculty research grants fund (CUPE 3903) is acknowledged.
References (32)
- et al., Efficiency of crop identification based on optical and SAR image time series, Remote Sensing of Environment (2005)
- A review of assessing the accuracy of classifications of remotely sensed data, Remote Sensing of Environment (1991)
- et al., Integration of optical and Synthetic Aperture Radar (SAR) imagery for delivering operational annual crop inventories, ISPRS Journal of Photogrammetry and Remote Sensing (2009)
- et al., Evaluation of two applications of spectral mixing models to image fusion, Remote Sensing of Environment (2000)
- et al., Data fusion and multisource image classification, International Journal of Remote Sensing (2004)
- Synergy of multitemporal ERS-1 SAR and Landsat TM data for classification of agricultural crops, Canadian Journal of Remote Sensing (2003)
- et al., Evaluation of image fusion techniques for large-scale mapping of non-green vegetation
- et al., Multi-date SAR/TM synergism for crop classification in Western Canada, Photogrammetric Engineering and Remote Sensing (1995)
- et al., Early season crop discrimination with combined SAR and TM data, Canadian Journal of Remote Sensing (1989)
- Enhanced classification of prairie landscapes using simulated RADARSAT-2 imagery, Canadian Journal of Remote Sensing (2004)
- Mapping grasslands for biofuel potential, USGS Newsroom
- Assessing the Accuracy of Remotely Sensed Data: Principles and Practices
- Classification accuracy assessment, IEEE Geoscience and Remote Sensing Society Newsletter
- Crop type identification potential of Radarsat-2 and MODIS images in prairie area, Canadian Journal of Remote Sensing
- Fusion of MODIS and Radarsat data for crop type classification – an initial study
- A wavelet and IHS integration method to fuse high resolution SAR with moderate resolution multispectral images, Photogrammetric Engineering and Remote Sensing