ABSTRACT
Light Field (LF) technology offers a truly immersive experience and has the potential to revolutionize entertainment, training, education, virtual and augmented reality, gaming, autonomous driving, and digital health. However, one of the main challenges when working with LF content is the amount of data needed to deliver a convincing experience with realistic disparity and smooth motion parallax between views. In this paper, we introduce a learning-based LF angular super-resolution approach for efficient synthesis of novel virtual views: given the four corner views, our method generates up to five in-between views. Our generative adversarial network exploits both the spatial and angular information of the LF to ensure smooth disparity between the generated and original views. We evaluate our approach on plenoptic, synthetic, and camera-array LF content, covering different baseline settings. Experimental results show that our proposed method outperforms state-of-the-art light field view synthesis techniques, producing novel views with high visual quality.
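To make the input/output layout of the abstract concrete, the minimal sketch below maps four corner sub-aperture views to a small set of in-between views. This is an illustrative stand-in, not the network proposed in the paper: the use of PyTorch, the plain convolutional generator, the channel sizes, and the fixed count of five novel views are all assumptions made only to show the data shapes involved.

```python
# Minimal sketch (illustrative assumptions, not the authors' architecture):
# four RGB corner views are stacked along the channel axis and a small
# convolutional generator predicts up to five in-between views.
import torch
import torch.nn as nn


class CornerToNovelViews(nn.Module):
    def __init__(self, num_novel_views=5):
        super().__init__()
        self.num_novel_views = num_novel_views
        # Input: 4 corner views x 3 RGB channels = 12 channels.
        self.net = nn.Sequential(
            nn.Conv2d(12, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * num_novel_views, kernel_size=3, padding=1),
        )

    def forward(self, corner_views):
        # corner_views: (batch, 4, 3, H, W) -> fold views into channels.
        b, v, c, h, w = corner_views.shape
        x = corner_views.reshape(b, v * c, h, w)
        out = self.net(x)
        # Unfold back to (batch, num_novel_views, 3, H, W).
        return out.reshape(b, self.num_novel_views, 3, h, w)


if __name__ == "__main__":
    corners = torch.rand(1, 4, 3, 128, 128)   # four corner sub-aperture views
    novel = CornerToNovelViews()(corners)
    print(novel.shape)                          # torch.Size([1, 5, 3, 128, 128])
```

In the actual GAN setting described in the abstract, a generator of this kind would be trained against a discriminator and with losses that use the LF's angular structure (e.g., consistency across views), so that disparity varies smoothly between generated and original views.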