
Automatic inpainting by removing fence-like structures in RGBD images

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

Recent inpainting techniques usually require human interaction, which is labor-intensive and dependent on user experience. In this paper, we introduce an automatic inpainting technique to remove undesired fence-like structures from images. Specifically, the proposed technique works on RGBD images, which have recently become cheap and easy to obtain using the Microsoft Kinect. The basic idea is to segment and remove the undesired fence-like structures using both depth and color information, and then adapt an existing inpainting algorithm to fill the holes left by the structure removal. We found that a satisfactory segmentation of such structures is difficult to achieve using the depth channel alone. Instead, we use the depth information to identify a set of foreground and background strokes, with which we apply a graph-cut algorithm on the color channels to obtain a more accurate segmentation for inpainting. We demonstrate the effectiveness of the proposed technique through experiments on a set of Kinect images.
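The pipeline the abstract describes — depth cues select the fence, a segmentation on the color channels refines it, and inpainting fills the resulting holes — can be sketched in heavily simplified form. Everything below is illustrative, not the paper's method: `remove_fence` and `fence_max_depth` are hypothetical names, a plain depth threshold stands in for the stroke-seeded graph cut, and a diffusion fill stands in for the exemplar-based inpainting algorithm the paper adapts.

```python
import numpy as np

def remove_fence(color, depth, fence_max_depth):
    """Toy version of the fence-removal pipeline.

    Pixels nearer than fence_max_depth are treated as fence, masked out,
    and filled by diffusing color in from the surrounding pixels. The
    paper instead derives foreground/background strokes from depth,
    refines the fence mask with a graph cut on the color channels, and
    fills the holes with exemplar-based inpainting.
    """
    fence = depth < fence_max_depth          # fence sits nearer the camera
    out = color.astype(np.float64)
    out[fence] = 0.0                         # open the holes
    for _ in range(200):                     # Jacobi-style diffusion fill
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[fence] = avg[fence]              # update only the hole pixels
    out[~fence] = color[~fence]              # observed pixels stay untouched
    return np.rint(out).astype(color.dtype), fence
```

For example, a thin vertical stripe of near-depth pixels on a flat-colored image is detected as fence and filled back to the surrounding color; on real Kinect data the threshold would be replaced by the stroke-plus-graph-cut segmentation described above.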



Notes

  1. https://sites.google.com/site/qinzoucn/documents

  2. http://www.microsoft.com/en-us/kinectforwindowsdev/downloads.aspx

  3. The size of the patch is \((2 \cdot sz + 1) \times (2 \cdot sz + 1)\).


Acknowledgments

The authors would like to thank Mr. Shufan Liu and Mr. Liang Zhang for their help in collecting the Kinect data. This research was supported, in part, by a project funded by the China Postdoctoral Science Foundation (2012M521472), the National Natural Science Foundation of China (61301277 and 41371431), the Hubei Provincial Natural Science Foundation (2013CFB299), the National Basic Research Program of China (2012CB725303), and funds from AFOSR FA9550-11-1-0327, NSF IIS-0951754, NSF IIS-1017199, and ARL W911NF-10-2-0060.

Author information


Corresponding author

Correspondence to Qin Zou.

About this article


Cite this article

Zou, Q., Cao, Y., Li, Q. et al. Automatic inpainting by removing fence-like structures in RGBD images. Machine Vision and Applications 25, 1841–1858 (2014). https://doi.org/10.1007/s00138-014-0637-y
