Patch-based self-adaptive matting for high-resolution image and video

Original Article · The Visual Computer

Abstract

We propose an efficient patch-based self-adaptive matting approach that reduces memory consumption when processing high-resolution images and video. Most existing image matting techniques apply a global optimization over the whole set of image pixels, which incurs prohibitively high memory consumption, especially for high-resolution images. Inspired by divide-and-conquer, we divide the image into small patches in a self-adaptive way, according to the distribution of unknown pixels, and handle the patches one by one. The patch-level alpha mattes are then combined according to per-patch weights, and relationships between patches are modeled by locally linear embedding to maintain consistency across the whole image. We further extend the framework to video matting by taking the temporal coherence of alpha mattes into account: a sampling method speeds up the processing of video frames, and a multi-frame graph model, solved efficiently by Random Walk, enhances temporal and spatial consistency. Experimental results show that the proposed method significantly reduces memory consumption while maintaining high-fidelity matting results on the benchmark datasets.
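To make the pipeline in the abstract concrete, the sketch below illustrates two of its ingredients: a self-adaptive, quadtree-style division of the image into patches driven by the distribution of unknown trimap pixels, and the standard locally linear embedding (LLE) weight solve of Roweis and Saul that could relate a patch to its neighbors. This is a minimal illustration under stated assumptions, not the paper's implementation; the function names, the splitting threshold, and the trimap encoding (128 marking unknown pixels) are assumptions for the sketch.

import numpy as np

def split_patches(trimap, x, y, w, h, max_unknown=4096, min_size=64, out=None):
    """Quadtree-style self-adaptive split: keep dividing a region while it
    holds too many unknown trimap pixels (an assumed splitting criterion)."""
    if out is None:
        out = []
    unknown = np.count_nonzero(trimap[y:y + h, x:x + w] == 128)
    if unknown <= max_unknown or min(w, h) <= min_size:
        if unknown > 0:                        # only patches with unknowns need matting
            out.append((x, y, w, h))
        return out
    hw, hh = w // 2, h // 2                    # four-way split of the region
    for dx, dy, sw, sh in ((0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)):
        split_patches(trimap, x + dx, y + dy, sw, sh, max_unknown, min_size, out)
    return out

def lle_weights(x, neighbors, reg=1e-3):
    """Standard LLE reconstruction weights: express feature vector x as a
    sum-to-one combination of its neighbors (rows of `neighbors`)."""
    Z = neighbors - x                          # shift the neighborhood to the query point
    G = Z @ Z.T                                # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(neighbors))  # regularize near-singular systems
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()                         # enforce the sum-to-one constraint

In the paper's framework the LLE weights would be computed over patch features and used to keep the combined alpha matte consistent across patch boundaries; how those features are built is not specified in the abstract, so it is left abstract here as well.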


Acknowledgements

We would like to thank the reviewers for their help in improving the paper. This work was partially supported by NSFC (61532003 and 61421003) and the Lenovo Outstanding Young Scientists Program.

Author information

Corresponding author

Correspondence to Xiaowu Chen.

About this article


Cite this article

Cao, G., Li, J., Chen, X. et al. Patch-based self-adaptive matting for high-resolution image and video. Vis Comput 35, 133–147 (2019). https://doi.org/10.1007/s00371-017-1424-3
