
Image Co-Saliency Detection and Instance Co-Segmentation Using Attention Graph Clustering Based Graph Convolutional Network


Abstract:

Co-Saliency Detection (CSD) aims to explore the concurrent patterns and salient objects in a group of relevant images, while Instance Co-Segmentation (ICS) aims to identify and segment all of these co-salient instances, generating a corresponding mask for each instance. To tackle these two tasks simultaneously, we present a novel adaptive graph convolutional network with attention graph clustering (GCAGC) for CSD and ICS, termed GCAGC-CSD and GCAGC-ICS, respectively. GCAGC-CSD contains three key model designs. First, we develop a graph convolutional network architecture that extracts multi-scale representations to characterize intra- and inter-image consistency. Second, we propose an attention graph clustering algorithm that distinguishes the salient foreground objects from common areas in an unsupervised manner. Third, we present a unified framework with an encoder-decoder structure to jointly train and optimize the graph convolutional network, the attention graph clustering module, and the CSD decoder in an end-to-end fashion. We then design a salient instance segmentation network for GCAGC-ICS, and combine the outputs of GCAGC-CSD and the instance segmentation branch to obtain instance-aware co-segmentation masks. The proposed GCAGC-CSD and GCAGC-ICS are extensively evaluated on four CSD benchmark datasets (iCoseg, Cosal2015, COCO-SEG, and CoSOD3k) and five ICS benchmark datasets (CoSOD3k, COCO-NONVOC, COCO-VOC, VOC12, and SOC), achieving superior performance over state-of-the-art methods on both tasks.
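To make the two central ingredients of the abstract concrete, the sketch below illustrates (1) a single graph-convolution step over image-region features and (2) an unsupervised attention-based split of nodes into a co-salient foreground and a background cluster. This is a minimal toy in NumPy, not the paper's actual GCAGC formulation: the function names, the cosine-to-mean attention score, and the mean-threshold clustering are all illustrative assumptions.

```python
import numpy as np

def graph_convolution(X, A, W):
    """One graph-convolution step: propagate node (region) features X
    over the symmetrically normalized adjacency A, project with W, ReLU.
    A is assumed to include self-loops so every row sum is positive."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    A_hat = d_inv_sqrt @ A @ d_inv_sqrt  # D^{-1/2} A D^{-1/2}
    return np.maximum(A_hat @ X @ W, 0.0)

def attention_scores(H):
    """Toy attention: score each node by cosine similarity to the mean
    feature, so nodes sharing the common (co-salient) pattern score high."""
    mean = H.mean(axis=0)
    denom = np.linalg.norm(H, axis=1) * np.linalg.norm(mean) + 1e-8
    return (H @ mean) / denom

def cluster_foreground(scores):
    """Unsupervised two-way split: threshold attention at its mean to
    separate foreground nodes from common background areas."""
    return scores > scores.mean()
```

In the paper's setting these nodes would be multi-scale feature locations drawn from several images, so the adjacency connects regions both within and across images, which is what lets the clustering isolate the pattern the images have in common.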
Published in: IEEE Transactions on Multimedia ( Volume: 24)
Page(s): 492 - 505
Date of Publication: 27 January 2021

