RGB-D Salient Object Detection via Minimum Barrier Distance Transform and Saliency Fusion

Publisher: IEEE

Abstract:

Automatic detection of salient objects in images has gained popularity in the computer vision field in recent years because of its usage in numerous vision tasks. Depth information plays an important role in the human visual system, yet it is underutilized in most existing two-dimensional (2-D) saliency detection methods. In this letter, a multistage salient object detection framework based on the minimum barrier distance transform and saliency fusion with multilayer cellular automata (MCA) is proposed. First, we independently generate the 3-D spatial prior, the depth bias, and the RGB-produced and depth-induced saliency maps. Next, the two saliency maps are weighted by the depth bias to obtain two initial maps. Then, a saliency optimization step is adopted to generate a more precise depth-induced saliency map. Moreover, the initial RGB-produced map and the optimized depth-induced map are further fused with the 3-D spatial prior. Finally, we utilize MCA to fuse all previously generated saliency maps and obtain the final saliency result covering the complete salient object. The proposed method is evaluated on the publicly available benchmark dataset RGBD1000. Compared with several state-of-the-art 2-D and depth-aware approaches, the experimental results demonstrate the effectiveness and superiority of our method, which accurately detects salient objects in RGB-D images and achieves the best overall performance.
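To illustrate the minimum barrier distance (MBD) transform named in the title, here is a minimal sketch of the common raster-scan approximation. This is not the authors' implementation: the function name `mbd_transform`, the toy 5x5 image, the choice of image-border pixels as background seeds, and the pass count are all illustrative assumptions. The barrier cost of a path is the maximum minus the minimum intensity along it, and each pixel's MBD is the smallest such cost over paths to the seed set.

```python
import numpy as np

def mbd_transform(img, seed_mask, n_passes=3):
    """Approximate the minimum barrier distance transform via raster scans.

    img       : 2-D grayscale array.
    seed_mask : boolean array; True marks seed (e.g. background) pixels.
    Returns an array D where D[y, x] approximates min over paths from a
    seed to (y, x) of (max intensity on path - min intensity on path).
    """
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    D = np.full((H, W), np.inf)   # current MBD estimate
    U = img.copy()                # maximum intensity along the best path
    L = img.copy()                # minimum intensity along the best path
    D[seed_mask] = 0.0            # seeds are at barrier distance zero

    def relax(y, x, ny, nx):
        # Try extending the best-known path of neighbor (ny, nx) to (y, x).
        if not (0 <= ny < H and 0 <= nx < W) or not np.isfinite(D[ny, nx]):
            return
        u = max(U[ny, nx], img[y, x])
        l = min(L[ny, nx], img[y, x])
        if u - l < D[y, x]:
            D[y, x], U[y, x], L[y, x] = u - l, u, l

    for _ in range(n_passes):
        # Forward pass: top-left to bottom-right.
        for y in range(H):
            for x in range(W):
                relax(y, x, y - 1, x)
                relax(y, x, y, x - 1)
        # Backward pass: bottom-right to top-left.
        for y in range(H - 1, -1, -1):
            for x in range(W - 1, -1, -1):
                relax(y, x, y + 1, x)
                relax(y, x, y, x + 1)
    return D

# Example: MBD saliency with the image border as background seeds.
img = np.zeros((5, 5))
img[2, 2] = 1.0                       # bright "object" in the centre
seeds = np.zeros((5, 5), dtype=bool)
seeds[0, :] = seeds[-1, :] = seeds[:, 0] = seeds[:, -1] = True
sal = mbd_transform(img, seeds)       # high MBD = salient
```

With border seeds, a pixel that differs strongly from everything on every path to the border receives a high distance, which is why MBD-based maps highlight objects distinct from the image boundary.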
Published in: IEEE Signal Processing Letters ( Volume: 24, Issue: 5, May 2017)
Page(s): 663 - 667
Date of Publication: 27 March 2017
