
New Contour Cue-Based Hybrid Sparse Learning for Salient Object Detection


Abstract:

Saliency detection has been a hot topic in recent years, and much effort has been made to address it from different perspectives. However, current saliency models cannot meet the needs of diversified scenes due to their limited generalization capability. To tackle this problem, in this paper we propose a hybrid saliency model that can fuse heterogeneous visual cues for robust salient object detection. A new contour cue is first introduced to provide discriminative saliency information for scene description. Its realization is based on a discrete optimization objective that can be solved efficiently with an iterative algorithm. The contour cue is then taken as part of a hybrid sparse learning model, in which cues from different domains interact with and complement each other for joint saliency fusion. This saliency fusion model is parameter-free, and its numerical solution can be obtained with gradient descent methods. Finally, we propose an object proposal-based collaborative filtering strategy to generate high-quality saliency maps from the above fusion results. Compared with traditional methods, the proposed saliency model fuses heterogeneous cues in a unified optimization framework rather than combining them separately. Therefore, it has favorable modeling capability under diversified scenes where saliency patterns appear quite differently. To verify the effectiveness of the proposed method, we conduct experiments on four large saliency benchmark datasets and compare it with 26 other state-of-the-art saliency models. Both qualitative and quantitative evaluation results indicate the superiority of our method, especially in challenging situations. In addition, we apply our saliency model to ship detection on radar platforms and obtain promising results over traditional detectors.
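The abstract does not spell out the hybrid sparse learning objective, so the following is only a generic, minimal sketch of the idea of fusing heterogeneous saliency cue maps with gradient descent: each cue map is assigned a non-negative fusion weight, and the weights are optimized against a simple least-squares agreement term with an L1 sparsity penalty. The loss, the function name `fuse_saliency_cues`, and all parameters are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def fuse_saliency_cues(cues, lam=0.01, lr=0.1, n_iter=200):
    """Fuse per-pixel saliency cue maps into a single saliency map.

    Illustrative stand-in for gradient-descent cue fusion with an L1
    sparsity penalty on the fusion weights; the actual objective of the
    paper is not given in the abstract.

    cues: array of shape (K, H, W), K heterogeneous cue maps in [0, 1].
    Returns the fused saliency map of shape (H, W).
    """
    K = cues.shape[0]
    X = cues.reshape(K, -1)          # flatten each cue map to a row vector
    target = X.mean(axis=0)          # consensus signal the fusion should match
    w = np.full(K, 1.0 / K)          # start from uniform fusion weights

    for _ in range(n_iter):
        residual = w @ X - target            # reconstruction error of the fused map
        grad = X @ residual / X.shape[1]     # least-squares gradient w.r.t. the weights
        grad += lam * np.sign(w)             # subgradient of the L1 sparsity penalty
        w -= lr * grad
        w = np.clip(w, 0.0, None)            # keep fusion weights non-negative
        if w.sum() > 0:
            w /= w.sum()                     # renormalize to a convex combination

    fused = (w @ X).reshape(cues.shape[1:])
    return np.clip(fused, 0.0, 1.0)

# Example: fuse three synthetic cue maps for a 64x64 image.
rng = np.random.default_rng(0)
cues = rng.random((3, 64, 64))
saliency_map = fuse_saliency_cues(cues)
print(saliency_map.shape)  # (64, 64)
```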
Published in: IEEE Transactions on Cybernetics (Volume: 51, Issue: 8, August 2021)
Page(s): 4212 - 4226
Date of Publication: 30 December 2019

PubMed ID: 31899445
