
View synthesis for FTV systems based on a minimum spatial distance and correspondence field

Published in Multidimensional Systems and Signal Processing

Abstract

The main drawbacks of virtual view synthesis based on common Depth-Image-Based Rendering (DIBR) algorithms are the image rectification, depth map estimation and image de-rectification steps, which add computational load and introduce image distortion. In this paper, an efficient and reliable method based on the concept of the Correspondence Field and the minimum distance among the spatial positions of corresponding pixels is proposed to synthesize virtual view images without the rectification, depth map estimation and de-rectification steps. Simulated multi-view images are used to evaluate the proposed algorithm. Compared with DIBR algorithms, simulation results show that, on average, PSNR is 4.37 dB (14.8%) higher, SSIM is 0.057 (6.2%) higher, UNIQUE is 0.13 (20%) higher, running time is 47.34 s (24.5%) lower, and the number of wrong pixels is 4.35 (38.5%) lower.
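As a rough illustration of how the PSNR and SSIM comparisons reported above could be computed between a ground-truth view and a synthesized view, the sketch below uses NumPy. Note the hedges: `global_ssim` is a simplified single-window variant of SSIM (the standard metric applies a sliding window over local patches), and both functions are illustrative, not the authors' evaluation code.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a synthesized view."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(ref, test, peak=255.0):
    """Simplified SSIM computed over the whole image as one window (illustrative only)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * peak) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical images give infinite PSNR and an SSIM of 1.0
a = np.full((4, 4), 128, dtype=np.uint8)
print(psnr(a, a))         # inf
print(global_ssim(a, a))  # 1.0
```

In practice, a windowed SSIM implementation (such as the one in scikit-image) would be used for results comparable to those reported in the paper.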





Author information

Corresponding author

Correspondence to Amir Mousavinia.


About this article


Cite this article

Hosseinpour, H., Mousavinia, A. View synthesis for FTV systems based on a minimum spatial distance and correspondence field. Multidim Syst Sign Process 30, 275–294 (2019). https://doi.org/10.1007/s11045-018-0556-6



