Abstract:
Multimodal remote sensing object detection (MM-RSOD) holds great promise for around-the-clock applications. However, it faces challenges in effectively extracting complementary features due to modality inconsistency and redundancy. Inconsistency can lead to semantic-spatial misalignment, while redundancy introduces uncertainty that is specific to each modality. To overcome these challenges and enhance complementarity exploration and exploitation, this article proposes a dual-dynamic cross-modal interaction network (DDCINet), a novel framework comprising two key modules: a dual-dynamic cross-modal interaction (DDCI) module and a dynamic feature fusion (DFF) module. The DDCI module simultaneously addresses both modality inconsistency and redundancy by employing a collaborative design of channel-gated spatial cross-attention (CSCA) and cross-modal dynamic filters (CMDFs) on evenly segmented multimodal features. The CSCA component enhances the semantic-spatial correlation between modalities by identifying the most relevant channel-spatial features through cross-attention, addressing modality inconsistency. In parallel, the CMDF component achieves cross-modal context interaction through static convolution and further generates dynamic spatially variant kernels to filter out irrelevant information between modalities, addressing modality redundancy. Following the improved feature extraction, the DFF module dynamically adjusts inter-channel dependencies guided by modality-specific global context to fuse features, achieving better complementarity exploitation. Extensive experiments conducted on three MM-RSOD datasets confirm the superiority and generalizability of the DDCINet framework. Notably, our DDCINet, built on the RoI Transformer baseline with a ResNet50 backbone, achieves 78.4% mAP50 on the DroneVehicle test set and outperforms state-of-the-art (SOTA) methods by large margins.
Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume 63)
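To make the channel-gated spatial cross-attention idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: the paper's actual CSCA operates on evenly segmented feature groups inside a detection backbone, while here we assume single C x H x W feature maps, derive a channel gate from the globally pooled infrared features, and let RGB tokens attend to infrared tokens via standard scaled dot-product cross-attention. All function names, shapes, and the residual connection are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_gated_cross_attention(rgb, ir):
    """Toy sketch of a CSCA-style interaction (hypothetical, not the paper's code).

    Inputs are feature maps of shape (C, H, W).
    1. Channel gate: global-average-pool the IR features, squash with a
       sigmoid, and re-weight RGB channels (deciding which channels matter).
    2. Spatial cross-attention: flatten maps to (H*W, C) token sequences;
       gated RGB tokens act as queries over IR keys/values, pulling in the
       spatially aligned complementary content from the other modality.
    """
    C, H, W = rgb.shape
    gate = 1.0 / (1.0 + np.exp(-ir.mean(axis=(1, 2))))   # (C,) channel gate
    q = (rgb * gate[:, None, None]).reshape(C, -1).T      # (HW, C) queries
    k = ir.reshape(C, -1).T                               # (HW, C) keys
    v = k                                                 # values = IR tokens
    attn = softmax(q @ k.T / np.sqrt(C), axis=-1)         # (HW, HW) weights
    out = (attn @ v).T.reshape(C, H, W)                   # attended IR features
    return rgb + out                                      # residual interaction

rgb = np.random.randn(8, 4, 4)
ir = np.random.randn(8, 4, 4)
fused = channel_gated_cross_attention(rgb, ir)            # shape (8, 4, 4)
```

In a real detector this block would sit inside the feature extractor and be applied symmetrically (IR attending to RGB as well), with learned projections for queries, keys, and values rather than raw feature tokens.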