
Real-time 2D to 3D Image Conversion Algorithm and VLSI Architecture for Natural Scene


Abstract

This paper presents a high-performance 2D-to-3D image conversion algorithm and its VLSI architecture for natural scenes. The depth map is generated using a horizontal dividing-line concept: the dividing line is estimated from pronounced color differences in local regions, and it marks the demarcation between the maximum and minimum values of the depth map. Depth-based image rendering (DIBR) is then employed to generate stereoscopic images from the depth map. Based on the proposed algorithm, a real-time VLSI architecture is presented; for cost efficiency, it is organized as module-based hardware governed by a timing-schedule controller. The multiplications and divisions of the algorithm are minimized in the circuit design, which reduces overall system complexity. Only one frame memory is required to generate real-time 3D images, since the depth map is computed on the fly from the locations of the dividing lines; this reduces both storage size and I/O bandwidth. The circuit was simulated and verified on an FPGA. The critical path consists of a multiplexer and one flip-flop, and the maximum clock rate reaches 205 MHz. For VGA compatibility, the stereoscopic RGB pixels are output in parallel to the interface on every clock cycle, and the maximum data rate of the proposed 3D conversion chip reaches 610 Mbytes per second, which meets real-time HD requirements.
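
The abstract describes the pipeline only at a high level. As an illustration of the general idea (not the authors' implementation), the following NumPy sketch builds a row-wise depth ramp from an estimated horizontal dividing line and synthesizes a stereo pair with a naive DIBR pixel shift; the band-based line estimator, the linear depth ramp, and the disparity scaling are all assumptions made for this example.

```python
import numpy as np

def estimate_dividing_line(rgb, band=8):
    # Hypothetical estimator (assumption, not the paper's method): choose the
    # row where the mean intensity of the band above differs most from the
    # mean intensity of the band below, i.e., the strongest horizontal
    # color/intensity transition in the image.
    h = rgb.shape[0]
    row_means = rgb.astype(np.float32).mean(axis=(1, 2))   # one value per row
    scores = np.zeros(h)
    for y in range(band, h - band):
        scores[y] = abs(row_means[y - band:y].mean() - row_means[y:y + band].mean())
    return int(scores.argmax())

def depth_from_dividing_line(h, w, line, d_max=255, d_min=0):
    # Row-wise depth map: maximum depth value above the dividing line,
    # linearly decreasing to the minimum value below it (assumed ramp shape).
    depth = np.empty((h, w), dtype=np.uint8)
    depth[:line, :] = d_max
    ramp = np.linspace(d_max, d_min, h - line).astype(np.uint8)
    depth[line:, :] = ramp[:, None]
    return depth

def dibr_shift(rgb, depth, max_disp=8):
    # Naive DIBR: shift each pixel horizontally by a disparity derived from
    # its depth value to form left/right views; disocclusion holes are left
    # unfilled for simplicity.
    h, w, _ = rgb.shape
    disp = (depth.astype(np.int32) * max_disp) // 255
    left, right = np.zeros_like(rgb), np.zeros_like(rgb)
    xs = np.arange(w)
    for y in range(h):
        left[y, np.clip(xs + disp[y], 0, w - 1)] = rgb[y]
        right[y, np.clip(xs - disp[y], 0, w - 1)] = rgb[y]
    return left, right
```

Because each pixel's depth in such a scheme depends only on its row and the dividing-line position, no depth-map frame buffer is needed, which fits the single-frame-memory claim. Likewise, if the three stereoscopic RGB bytes are output in parallel on each 205 MHz clock, the peak throughput would be roughly 3 × 205 M ≈ 615 Mbytes/s, in line with the reported 610 Mbytes per second.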



Author information

Corresponding author

Correspondence to Shih-Chang Hsia.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hsia, SC., Wang, SH. & Tsai, HC. Real-time 2D to 3D Image Conversion Algorithm and VLSI Architecture for Natural Scene. Circuits Syst Signal Process 41, 4455–4478 (2022). https://doi.org/10.1007/s00034-022-01983-y

