
Adaptive Saliency Biased Loss for Object Detection in Aerial Images


Abstract:

Object detection in aerial images remains a challenging problem due to low image resolution, complex backgrounds, and variations in the sizes and orientations of objects. In recent years, several multiscale and rotated-box-based deep neural networks (DNNs) have been proposed and have achieved promising results. In this article, a new method for designing loss functions, called adaptive saliency biased loss (ASBL), is proposed for training DNN object detectors to achieve improved performance. ASBL can be implemented at the image level (image-based ASBL) or at the anchor level (anchor-based ASBL). The method computes saliency information for input images and for the anchors generated by DNN object detectors, and then weights training examples and anchors differently according to their saliency measurements, giving complex images and difficult targets more weight during training. In our experiments on two of the largest public benchmark data sets of aerial images, DOTA and NWPU VHR-10, the existing RetinaNet was trained with ASBL to produce a one-stage detector, ASBL-RetinaNet. ASBL-RetinaNet outperformed the original RetinaNet by 3.61 and 12.5 mean average precision (mAP) on the two data sets, respectively, and also outperformed ten other state-of-the-art object detection methods.
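The abstract does not give ASBL's actual weighting formulas, so the following is only a hypothetical sketch of the general idea at the anchor level: a RetinaNet-style binary focal loss per anchor, rescaled by a precomputed per-anchor saliency weight so that anchors over salient or difficult regions contribute more to training. All names and parameters here are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def saliency_biased_focal_loss(logits, targets, saliency_weights,
                                   alpha=0.25, gamma=2.0):
        """Per-anchor focal loss rescaled by saliency-derived weights.

        logits:           (N,) raw classification scores for N anchors
        targets:          (N,) float binary labels (1 = object, 0 = background)
        saliency_weights: (N,) nonnegative weights, larger for anchors covering
                          salient/difficult regions (hypothetical input; the
                          paper derives its weights from saliency measurements)
        """
        p = torch.sigmoid(logits)
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = p * targets + (1 - p) * (1 - targets)            # prob. of true class
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        focal = alpha_t * (1 - p_t) ** gamma * ce              # standard focal loss
        # Bias the loss toward salient anchors, normalizing by the total weight.
        return (saliency_weights * focal).sum() / saliency_weights.sum().clamp(min=1.0)

    # Illustrative usage: anchors over salient regions get double weight.
    logits = torch.randn(8)
    targets = torch.tensor([1., 0., 0., 1., 0., 0., 0., 1.])
    weights = torch.tensor([2., 1., 1., 2., 1., 1., 1., 2.])
    loss = saliency_biased_focal_loss(logits, targets, weights)

Image-based ASBL would, by analogy, apply one such weight per training image rather than per anchor; how those weights are computed from saliency maps is specified in the paper itself, not here.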
Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume 58, Issue 10, October 2020)
Page(s): 7154-7165
Date of Publication: 25 March 2020
