Abstract:
Despite the advantage of exploiting inter-image information by jointly processing images for co-saliency, co-segmentation, or co-localization, joint processing introduces a few drawbacks: 1) it may be unnecessary in scenarios where it does not outperform individual image processing; 2) it is more complex than individual image processing; and 3) it requires intricate parameter tuning. In this paper, we propose a simple co-saliency estimation method in which we fuse the saliency maps of different images using a dense correspondence technique. More importantly, the co-saliency estimation is guided by our proposed quality measure, which decides whether the saliency fusion actually improves the quality of a saliency map. Our basic idea for the quality metric is that a high-quality saliency map should have well-separated foreground and background, as well as a concentrated foreground, like the ground truth. Extensive experiments on several benchmark datasets, including the large-scale ImageNet dataset, for the applications of foreground co-segmentation and co-localization show that the proposed framework achieves very competitive results.
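The sketch below illustrates the quality idea stated in the abstract: score a saliency map by how well its foreground separates from its background and by how spatially concentrated the foreground is. This is not the authors' exact metric; the threshold, spread normalization, and equal weighting are illustrative assumptions, and `fuse_with_dense_correspondence` is a hypothetical helper standing in for the paper's fusion step.

```python
# Hypothetical sketch of a saliency-map quality score: rewards
# (a) well-separated foreground/background and (b) a compact foreground.
import numpy as np

def saliency_quality(sal, thresh=0.5):
    """Return a quality score for a 2-D saliency map; higher is better."""
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)  # normalize to [0, 1]
    fg = sal >= thresh                                        # coarse foreground mask (assumed threshold)
    if fg.sum() == 0 or (~fg).sum() == 0:
        return 0.0                                            # degenerate map: all fg or all bg

    # (a) Separability: gap between mean foreground and mean background saliency.
    separation = sal[fg].mean() - sal[~fg].mean()

    # (b) Concentration: small spatial spread of foreground pixels, normalized
    # by the image diagonal, indicates a compact, ground-truth-like foreground.
    ys, xs = np.nonzero(fg)
    h, w = sal.shape
    spread = np.sqrt(ys.var() + xs.var()) / np.sqrt(h**2 + w**2)
    concentration = 1.0 - np.clip(spread / 0.5, 0.0, 1.0)

    return 0.5 * separation + 0.5 * concentration             # equal weighting (assumed)

# Usage idea, mirroring the quality-guided decision described in the abstract:
# keep the fused map only if it scores higher than the single-image map.
# fused = fuse_with_dense_correspondence(sal_a, sal_b)   # hypothetical fusion helper
# best = fused if saliency_quality(fused) > saliency_quality(sal_a) else sal_a
```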
Published in: IEEE Transactions on Multimedia (Volume: 20, Issue: 9, September 2018)