Abstract
We propose a novel algorithm that segments objects by refining the existing segmentation results of co-segmentation algorithms [1]. Previous co-segmentation algorithms work well when the main regions of the images contain only the target objects; however, their performance degrades significantly when objects from multiple categories appear in the images. In contrast, our method combines mask transformation across multiple images with discriminative enhancement across multiple object categories, which ensures good performance in both scenarios. We compute SIFT Flow [2] between pre-segmented source images and the target image, and transform each source image's segmentation mask to fit the target image using the flow vectors. All the transformed masks then vote on the target image's mask to produce the initial segmentation result. We further use the ratio between the target category and the other categories to suppress the side effects of other objects that may appear in the initial segmentation. We evaluate our algorithm on Internet images collected by Rubinstein et al. [1], and we conduct an additional experiment to study multi-object conjunction cases. Our algorithm is computationally efficient and achieves better performance than the state-of-the-art algorithm.
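The pipeline described in the abstract — warp each pre-segmented source mask onto the target image with the flow field, average the warped masks into a confidence map, and apply a ratio test against the other categories — might be sketched roughly as follows. This is a minimal illustration under assumed conventions (a dense per-pixel (dy, dx) flow field, binary masks), not the paper's actual implementation; the function names and the `ratio_thresh` parameter are hypothetical.

```python
import numpy as np

def warp_mask(mask, flow):
    """Warp a binary segmentation mask with a dense flow field.

    mask: (H, W) binary array for a pre-segmented source image.
    flow: (H, W, 2) array of (dy, dx) vectors mapping each target pixel
          to its corresponding source pixel (as dense correspondences
          such as SIFT Flow would provide).
    """
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys + flow[..., 0], 0, H - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, W - 1).astype(int)
    return mask[src_y, src_x]

def vote_masks(warped_masks, threshold=0.5):
    """Average the transformed masks into a per-pixel confidence map and
    threshold it to obtain the initial segmentation of the target image."""
    confidence = np.mean(np.stack(warped_masks, axis=0), axis=0)
    return confidence, confidence >= threshold

def suppress_other_categories(target_conf, other_confs, ratio_thresh=1.0, eps=1e-6):
    """Keep only pixels where the target category's confidence dominates
    the other categories' confidence (the ratio test from the abstract).
    ratio_thresh is an illustrative parameter, not taken from the paper."""
    others = np.maximum.reduce(other_confs)
    ratio = target_conf / (others + eps)
    return ratio >= ratio_thresh
```

With a zero flow field, `warp_mask` returns the source mask unchanged; nonzero vectors pull labels from the displaced source locations, so each source image casts a geometrically aligned vote on the target.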
References
Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1939–1946. IEEE (2013)
Liu, C., Yuen, J., Torralba, A.: SIFT Flow: Dense correspondence across scenes and its applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(5), 978–994 (2011)
Vicente, S., Rother, C., Kolmogorov, V.: Object cosegmentation. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2217–2224 (2011)
Cheng, M.-M., Zhang, G.-X., Mitra, N.J., Huang, X., Hu, S.-M.: Global contrast based salient region detection. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 409–416. IEEE (2011)
Rubinstein, M., Liu, C., Freeman, W.T.: Annotation propagation in large image databases via dense image correspondence. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part III. LNCS, vol. 7574, pp. 85–99. Springer, Heidelberg (2012)
Yang, Y., Liang, Q., Niu, L., Zhang, Q.: Belief propagation stereo matching algorithm using ground control points. In: Fifth International Conference on Graphic and Image Processing, p. 90690W. International Society for Optics and Photonics (2014)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
Rother, C., Kolmogorov, V., Blake, A.: GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG) 23, 309–314 (2004)
© 2014 Springer International Publishing Switzerland
Li, H., Yao, H., Sun, X. (2014). Using Label Propagation to Get Confidence Map for Segmentation. In: Ooi, W.T., Snoek, C.G.M., Tan, H.K., Ho, CK., Huet, B., Ngo, CW. (eds) Advances in Multimedia Information Processing – PCM 2014. PCM 2014. Lecture Notes in Computer Science, vol 8879. Springer, Cham. https://doi.org/10.1007/978-3-319-13168-9_9
Print ISBN: 978-3-319-13167-2
Online ISBN: 978-3-319-13168-9