
An efficient stereo matching based on fragment matching

  • Original Article
  • Published in: The Visual Computer

Abstract

We propose a stereo matching method based on image fragments. Unlike traditional pixel-based stereo matching methods, we use edge information in the reference image to divide it into small fragments, and we then use these fragments to find the best-matching fragments in the other image along the horizontal and vertical directions. This yields two disparity maps, which we fuse into a more accurate map using a match confidence value computed for each one. Next, we calculate the exact disparity value for each pixel within a fragment. Finally, the disparity map is filled and smoothed to obtain the final result. Experiments demonstrate that the proposed method has low computational complexity and high matching accuracy, produces clear disparities at object edges, and achieves good performance on the Middlebury and KITTI benchmarks.
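The pipeline outlined above (edge-based fragmentation, directional fragment matching, confidence-based fusion, per-pixel refinement, then filling and smoothing) can be illustrated with a short Python sketch of a horizontal matching pass. This is not the authors' implementation: it assumes Canny edges split each scanline of the left image into fragments, matches each fragment by mean absolute difference over a fixed disparity range, and uses the gap between the best and second-best costs as the match confidence; all function names and parameters below are illustrative.

# Illustrative sketch only, not the authors' code.
# Assumptions: rectified grayscale uint8 images, Canny edges delimit fragments
# on each scanline, SAD-style fragment matching, cost-gap confidence.
import numpy as np
import cv2

def scanline_fragments(edges, row):
    """Split one scanline into fragments bounded by edge pixels."""
    edge_cols = np.flatnonzero(edges[row])
    bounds = np.concatenate(([0], edge_cols, [edges.shape[1]]))
    return [(int(bounds[i]), int(bounds[i + 1]))
            for i in range(len(bounds) - 1) if bounds[i + 1] - bounds[i] > 1]

def match_fragment(left_row, right_row, x0, x1, max_disp):
    """Return (disparity, confidence) for one fragment via a SAD search."""
    frag = left_row[x0:x1].astype(np.float32)
    costs = []
    for d in range(max_disp + 1):
        if x0 - d < 0:
            break
        cand = right_row[x0 - d:x1 - d].astype(np.float32)
        costs.append(np.abs(frag - cand).mean())
    if len(costs) < 2:
        return 0, 0.0
    costs = np.asarray(costs)
    order = np.argsort(costs)
    best, second = costs[order[0]], costs[order[1]]
    confidence = (second - best) / (second + 1e-6)  # larger gap = more reliable
    return int(order[0]), float(confidence)

def fragment_disparity(left_gray, right_gray, max_disp=64):
    """One horizontal pass: per-fragment disparity and confidence maps."""
    edges = cv2.Canny(left_gray, 50, 150)
    disp = np.zeros(left_gray.shape, np.float32)
    conf = np.zeros(left_gray.shape, np.float32)
    for y in range(left_gray.shape[0]):
        for x0, x1 in scanline_fragments(edges, y):
            d, c = match_fragment(left_gray[y], right_gray[y], x0, x1, max_disp)
            disp[y, x0:x1] = d
            conf[y, x0:x1] = c
    return disp, conf

Under the same assumptions, a second pass using fragments formed along the vertical direction (one plausible reading of the abstract) would give the second map, and keeping at each pixel the disparity with the higher confidence would mirror the fusion step, before per-pixel refinement, filling, and smoothing.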

Acknowledgements

This work was supported by Grant No. 2018YJSY073 from the Graduate Student's Research and Innovation Fund of Sichuan University and Grant No. 2018RZ0080 from the Science and Technology Department of Sichuan Province.

Author information

Corresponding author

Correspondence to Yingjiang Li.

About this article

Cite this article

Li, Y., Zhang, J., Zhong, Y. et al. An efficient stereo matching based on fragment matching. Vis Comput 35, 257–269 (2019). https://doi.org/10.1007/s00371-018-1491-0
