Abstract:
Semi-supervised video object segmentation aims to segment the target objects in a video given the ground-truth annotation of the first frame. Previous successful methods mostly rely on online learning or static-image pre-training to improve accuracy. However, online learning incurs a large time cost at inference, which restricts practical use, while static-image pre-training requires heavy data augmentation that is complicated and time-consuming. This paper presents a fast Hierarchical Embedding Guided Network (HEGNet) that is trained only on Video Object Segmentation (VOS) datasets and does not use online learning. HEGNet integrates propagation-based and matching-based approaches: it propagates the predicted mask of the previous frame as a soft cue and extracts hierarchical embeddings at both deep and shallow layers for feature matching. The label map produced at the deep layer also guides the matching at the shallow layer. Evaluated on the DAVIS-2016 and DAVIS-2017 validation sets, our method achieves overall scores of 84.9% and 71.9%, respectively. It surpasses methods that use neither online learning nor static-image pre-training and runs at 0.08 seconds per frame.
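The core matching idea described in the abstract can be sketched as follows. This is not the authors' implementation: the softmax temperature, the nearest-neighbor upsampling of the coarse label map, and the fusion weight `alpha` are all illustrative assumptions. The sketch matches query-frame features against annotated reference-frame features at a deep (coarse) level, then uses the resulting soft label map to guide matching at a shallow (fine) level.

```python
import numpy as np

def match_features(ref_feat, ref_mask, query_feat, temperature=10.0):
    """Soft foreground probability for each query pixel via cosine
    similarity to every reference pixel, softmax-weighted by ref_mask.
    ref_feat: (C, Hr, Wr), ref_mask: (Hr, Wr) in [0, 1], query_feat: (C, Hq, Wq)."""
    C, Hq, Wq = query_feat.shape
    r = ref_feat.reshape(C, -1)
    q = query_feat.reshape(C, -1)
    # L2-normalize channel vectors so the dot product is cosine similarity.
    r = r / (np.linalg.norm(r, axis=0, keepdims=True) + 1e-8)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    sim = q.T @ r                              # (Hq*Wq, Hr*Wr) similarity matrix
    w = np.exp(temperature * sim)              # softmax weights (temperature assumed)
    w /= w.sum(axis=1, keepdims=True)
    label = w @ ref_mask.reshape(-1)           # weighted vote of reference labels
    return label.reshape(Hq, Wq)

def upsample2x(x):
    """Nearest-neighbor upsample of a coarse label map to the fine resolution."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def hierarchical_match(ref_deep, ref_shallow, mask_deep, mask_shallow,
                       q_deep, q_shallow, alpha=0.5):
    """Deep-layer matching produces a coarse label map that guides
    the shallow-layer matching (fusion by simple blending, assumed)."""
    coarse = match_features(ref_deep, mask_deep, q_deep)
    guide = upsample2x(coarse)                 # lift coarse map to fine resolution
    fine = match_features(ref_shallow, mask_shallow, q_shallow)
    return alpha * guide + (1.0 - alpha) * fine
```

In the actual paper the guidance and fusion are learned inside the network; this numpy version only illustrates why a coarse, semantically reliable label map can stabilize fine-level matching, which is noisier but spatially more precise.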
Notes: This DOI was registered to an article that was not presented by the author(s) at this conference. As per section 8.2.1.B.13 of IEEE's "Publication Services and Products Board Operations Manual," IEEE has chosen to exclude this article from distribution. We regret any inconvenience.
Date of Conference: 19-22 September 2021
Date Added to IEEE Xplore: 23 August 2021
ISBN Information: