Adaptive Pixel-wise and Block-wise Stereo Matching in Lighting Condition Changes

Published in: Journal of Signal Processing Systems

Abstract

Depth information plays an important role in the production of three-dimensional (3D) video content. One way to acquire it is stereo matching, which searches for correspondences between the two viewpoint images of a stereo pair and then estimates depth by computing the disparity between corresponding points. When the stereo pair is captured under uniform illumination and exposure conditions, the correspondence search generally produces relatively accurate results. However, accurate correspondences are difficult to estimate when the two viewpoint images are captured under different illumination and exposure conditions. In this paper, we analyze conventional pixel-wise and block-wise stereo matching methods that are robust to lighting condition changes, and based on this analysis we propose an adaptive pixel-wise and block-wise stereo matching method.
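
To make the pipeline described above concrete, here is a minimal, generic sketch of block-wise stereo matching with an illumination-robust cost. It is not the adaptive method proposed in the paper; the function names and parameters (zncc, block_matching, the window size, the disparity range) are illustrative assumptions. Zero-mean normalized cross-correlation (ZNCC) is used because subtracting each patch's mean and normalizing by its standard deviation cancels gain and offset intensity differences between the two views.

```python
import numpy as np

def zncc(patch_l, patch_r, eps=1e-8):
    """Zero-mean normalized cross-correlation between two equally sized patches.

    Removing the mean and normalizing by the patch energy makes the score
    invariant to affine (gain/offset) intensity changes, which is why
    ZNCC-style costs are often used when the two views are captured under
    different exposure or illumination.
    """
    a = patch_l - patch_l.mean()
    b = patch_r - patch_r.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return (a * b).sum() / denom

def block_matching(left, right, max_disparity=64, half_win=4):
    """Brute-force block-wise matching along horizontal epipolar lines.

    For every pixel in the (rectified, grayscale) left image, search
    disparities 0..max_disparity in the right image and keep the disparity
    with the highest ZNCC score.
    """
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1]
            best_d, best_score = 0, -np.inf
            for d in range(0, min(max_disparity, x - half_win) + 1):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1]
                score = zncc(ref, cand)
                if score > best_score:
                    best_score, best_d = score, d
            disparity[y, x] = best_d
    return disparity
```

In practice such a brute-force search is replaced by cost-volume filtering and optimization; the contribution of this paper lies in adaptively combining pixel-wise and block-wise matching costs under lighting condition changes rather than in the basic search itself.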


Acknowledgements

This work was supported by the ‘Civil-Military Technology Cooperation Program’ grant funded by the Korean government.

Author information

Corresponding author

Correspondence to Yo-Sung Ho.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chang, YJ., Ho, YS. Adaptive Pixel-wise and Block-wise Stereo Matching in Lighting Condition Changes. J Sign Process Syst 91, 1305–1313 (2019). https://doi.org/10.1007/s11265-019-1442-7
