Abstract:
The use of multimodal data from multiple sensors (e.g., hyperspectral and light detection and ranging (LiDAR) data) to classify ground objects has been an important topic in remote sensing interpretation. However, complex backgrounds make it difficult to extract contextual relationships, and redundancy and noise among multimodal data pose great challenges to accurate classification. In this letter, we propose a novel spatial context and de-redundant fusion network (SCDNet) that fuses hyperspectral and LiDAR data for land cover classification. Specifically, a multiscale attention fusion module (MSAF) is developed in the feature extraction stage; it adaptively fuses global and local information at different scales to obtain a more accurate spatial context. In the feature fusion stage, a fusion module based on a gating mechanism is proposed, which removes redundant information from the multimodal data and yields discriminative fused features. We conduct a series of comparison and ablation experiments on the Houston2013 and Trento datasets, and the results demonstrate the effectiveness of the proposed method.
Published in: IEEE Geoscience and Remote Sensing Letters ( Volume: 20)
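The letter's abstract does not include implementation details. As a rough illustration of the gating idea it describes, below is a minimal PyTorch sketch of a gated fusion layer that weights hyperspectral and LiDAR feature maps before combining them. All names here (GatedFusion, the 1x1 convolutional gate, the channel dimension) are assumptions made for illustration, not the authors' actual SCDNet module.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative gated fusion of two modality feature maps.

    A sigmoid gate, computed from the concatenated hyperspectral (HSI)
    and LiDAR features, weights each modality per pixel and per channel
    before they are combined, so redundant channels can be suppressed.
    This sketches the general gating idea only; it is not the exact
    fusion module proposed in the letter.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution producing a gate in [0, 1] for each position/channel
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, hsi_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([hsi_feat, lidar_feat], dim=1))
        # Convex combination: the gate decides how much of each modality passes
        return g * hsi_feat + (1.0 - g) * lidar_feat

if __name__ == "__main__":
    # Toy shapes: batch of 2, 64-channel features, 16x16 spatial patch
    fuse = GatedFusion(channels=64)
    hsi = torch.randn(2, 64, 16, 16)
    lidar = torch.randn(2, 64, 16, 16)
    print(fuse(hsi, lidar).shape)  # torch.Size([2, 64, 16, 16])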