Abstract:
In remote sensing change detection (RSCD) tasks, high-resolution remote sensing images provide fine image details and complex texture features. Deep learning (DL)-based RSCD methods have achieved state-of-the-art (SOTA) performance. However, most of these methods consider only vertical multiscale features during feature extraction, ignoring the loss of contextual features caused by scale changes. To overcome these issues, a deeper multiscale encoding–decoding feature fusion network (DMEDNet) is proposed in this letter. The encoding stage operates in both the horizontal dimension (features of the same scale within the same layer) and the vertical dimension (features of different scales across different layers), allowing the model to fully extract deeper multiscale features. The decoding stage further mitigates the loss of contextual information by reusing the same-scale features obtained during encoding. In addition, a parallel convolutional block attention module (PCBAM) is applied to focus on the most representative features, thereby improving the detection performance of the model. The proposed method is compared with several other SOTA CD methods on the season-varying change detection dataset (CDD). The experimental results show that the proposed method achieves the highest F1 score of 97.4% and exhibits stronger robustness in handling complex scenes and small change targets.
Published in: IEEE Geoscience and Remote Sensing Letters (Volume: 20)
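The abstract names a parallel convolutional block attention module (PCBAM) but does not give its internals. Below is a minimal sketch of one plausible reading: the standard CBAM channel- and spatial-attention branches (Woo et al., 2018) computed in parallel from the same input and fused multiplicatively, rather than applied sequentially as in the original CBAM. The class names, the reduction ratio, and the fusion by element-wise multiplication are illustrative assumptions, not the paper's confirmed design.

```python
# Hypothetical PCBAM sketch: channel and spatial attention in parallel.
# Assumptions (not from the paper): standard CBAM branches, reduction=16,
# fusion by element-wise multiplication of both attention maps.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class PCBAM(nn.Module):
    """Channel and spatial attention computed in parallel from the same
    input and fused, instead of CBAM's sequential channel-then-spatial."""

    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        # Broadcasting: (B,C,1,1) channel weights and (B,1,H,W) spatial
        # weights are applied to the input simultaneously.
        return x * self.ca(x) * self.sa(x)


# Usage: refine a bitemporal difference feature map.
feats = torch.randn(2, 64, 32, 32)
refined = PCBAM(64)(feats)
print(refined.shape)  # torch.Size([2, 64, 32, 32])
```

The parallel arrangement means neither attention branch conditions on the other's output, which is one common motivation for "parallel CBAM" variants; whether DMEDNet fuses the two maps by multiplication, addition, or another operator is not stated in the abstract.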