Abstract:
Deep learning technology has rapidly advanced the interpretation of polarimetric synthetic aperture radar (PolSAR) images in recent years. However, deep learning methods for PolSAR image interpretation rely primarily on large volumes of labeled data to make precise predictions, while disregarding the potential physical features of PolSAR. To address this problem, a deep fusion network is proposed in this letter, which effectively exploits the complementarity between the amplitude and physical features of PolSAR images to enhance the interpretability of the network and improve PolSAR image classification performance. In addition, an improved feature pyramid network (IFPN) and a learnable feature fusion module (LFFM) are proposed to autonomously learn the required fused feature information and avoid manual feature selection. Finally, the spectral features of PolSAR data are fused to further enhance the discriminability of the features extracted by the proposed network and improve its classification accuracy. To verify the effectiveness of the proposed method, two real PolSAR datasets were used. The experimental results demonstrate that the proposed method achieves higher accuracy, even with a limited number of labeled samples.
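The abstract does not specify how the LFFM combines the two feature streams; one common realization of "learnable fusion" is a softmax-weighted sum whose weights are trained with the network. The sketch below illustrates only that generic idea with NumPy; the function names, the two-weight parameterization, and the use of a plain weighted sum are assumptions for illustration, not the letter's actual design.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of fusion logits.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_features(amp_feat, phys_feat, logits):
    """Hypothetical learnable fusion: softmax-weighted sum of an
    amplitude feature map and a physical feature map. In a real
    network, `logits` would be trainable parameters updated by
    backpropagation; here they are passed in directly."""
    w = softmax(logits)              # two weights summing to 1
    return w[0] * amp_feat + w[1] * phys_feat

# Stand-in feature maps (real ones would come from the two branches).
amp = np.ones((4, 4))                # amplitude-branch features
phys = np.full((4, 4), 3.0)         # physical-branch features

# Equal logits give equal weights, so the fusion reduces to the
# elementwise mean of the two maps.
fused = fuse_features(amp, phys, np.array([0.0, 0.0]))
```

Because the weights are produced by a softmax over trainable logits, the network can shift emphasis between the amplitude and physical branches during training instead of relying on a hand-picked mixing ratio.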
Published in: IEEE Geoscience and Remote Sensing Letters (Volume: 21)