
Rotation-Invariant Feature Learning in VHR Optical Remote Sensing Images via Nested Siamese Structure With Double Center Loss


Abstract:

Rotation-invariant features are of great importance for object detection and image classification in very-high-resolution (VHR) optical remote sensing images. Although the multibranch convolutional neural network (mCNN) has been demonstrated to be very effective for rotation-invariant feature learning, how to effectively train such a network is still an open problem. In this article, a nested Siamese structure (NSS) is proposed for training the mCNN to learn effective rotation-invariant features; it consists of an inner Siamese structure to enhance intraclass cohesion and an outer Siamese structure to enlarge the interclass margin. Moreover, a double center loss (DCL) function, in which training samples from the same class are mapped closer to each other while those from different classes are mapped far away from each other, is proposed to train the NSS even with a small number of training samples. Experimental results on three benchmark data sets demonstrate that the NSS trained with DCL handles rotation variations effectively when learning features for image classification and outperforms several state-of-the-art rotation-invariant feature learning algorithms even when only a small number of training samples is available.
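The abstract does not give the exact formulation of the double center loss, but its stated goal (pull same-class features toward a shared center, push different classes apart) can be illustrated with a minimal sketch. The sketch below assumes a standard learnable-center formulation extended with a margin-based inter-center separation term; the class `DoubleCenterLoss` and the hyperparameters `margin` and the implicit weighting are hypothetical and are not taken from the paper.

```python
# Hedged sketch of a "double center loss"-style objective: an intra-class term
# that pulls each feature toward its class center, plus an inter-class term
# that pushes different class centers at least `margin` apart. This is an
# illustration of the idea described in the abstract, not the authors' code.
import torch
import torch.nn as nn


class DoubleCenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        # One learnable center per class in the feature space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Intra-class cohesion: squared distance of each feature to its own center.
        own_centers = self.centers[labels]                       # (B, D)
        intra = ((features - own_centers) ** 2).sum(dim=1).mean()

        # Inter-class margin: hinge penalty when two different class centers
        # lie closer together than `margin`.
        dists = torch.cdist(self.centers, self.centers, p=2)     # (C, C)
        num_classes = self.centers.size(0)
        off_diag = ~torch.eye(num_classes, dtype=torch.bool, device=dists.device)
        inter = torch.clamp(self.margin - dists[off_diag], min=0).pow(2).mean()

        return intra + inter


# Example usage with random 128-D features, batch size 8, 10 classes.
if __name__ == "__main__":
    loss_fn = DoubleCenterLoss(num_classes=10, feat_dim=128)
    feats = torch.randn(8, 128)
    labels = torch.randint(0, 10, (8,))
    print(loss_fn(feats, labels))
```

In practice such a loss would be combined with a classification loss (e.g., cross-entropy) when training the Siamese branches, but the weighting between the terms here is an assumption rather than a detail reported in the abstract.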
Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume: 59, Issue: 4, April 2021)
Page(s): 3326 - 3337
Date of Publication: 18 September 2020
