DOI: 10.1145/3565516.3565519
research-article

Light Field GAN-based View Synthesis using full 4D information

Published: 01 December 2022

ABSTRACT

Light Field (LF) technology offers a truly immersive experience and has the potential to revolutionize entertainment, training, education, virtual and augmented reality, gaming, autonomous driving, and digital health. However, one of the main issues when working with LF is the amount of data needed to create a mesmerizing experience with realistic disparity and smooth motion parallax between views. In this paper, we introduce a learning-based LF angular super-resolution approach for efficient view synthesis of novel virtual images: given the four corner views, our method generates up to five in-between views. Our generative adversarial network uses LF spatial and angular information to ensure smooth disparity between the generated and original views. We consider plenoptic and synthetic LF content as well as camera-array implementations, which support different baseline settings. Experimental results show that our proposed method outperforms state-of-the-art light field view synthesis techniques, producing novel views with high visual quality.
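To make the corner-to-novel-view setup concrete, the sketch below shows a minimal PyTorch generator that takes the four corner sub-aperture views of a light field and predicts a stack of in-between views. This is not the authors' architecture: the layer sizes, class names, and the simple encoder/decoder structure are assumptions made purely for illustration, and a full GAN would also pair this generator with a discriminator and an adversarial loss on the synthesized views.

# Illustrative sketch only (assumed architecture, not the paper's network):
# map 4 corner views of a light field to num_novel_views in-between views.
import torch
import torch.nn as nn

class CornerToNovelViewGenerator(nn.Module):
    def __init__(self, num_novel_views=5, feat=64):
        super().__init__()
        # Input: 4 corner views stacked along the channel axis (4 views x 3 RGB = 12 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(12, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Output: the synthesized in-between views (num_novel_views x 3 RGB channels).
        self.decoder = nn.Conv2d(feat, num_novel_views * 3, kernel_size=3, padding=1)

    def forward(self, corner_views):
        # corner_views: (batch, 4, 3, H, W) -> flatten the view axis into channels.
        b, v, c, h, w = corner_views.shape
        x = corner_views.reshape(b, v * c, h, w)
        features = self.encoder(x)
        novel = self.decoder(features)
        # Reshape back to (batch, num_novel_views, 3, H, W).
        return novel.reshape(b, -1, 3, h, w)

if __name__ == "__main__":
    gen = CornerToNovelViewGenerator(num_novel_views=5)
    corners = torch.rand(1, 4, 3, 128, 128)  # four corner sub-aperture views
    synthesized = gen(corners)
    print(synthesized.shape)                 # torch.Size([1, 5, 3, 128, 128])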

Published in

CVMP '22: Proceedings of the 19th ACM SIGGRAPH European Conference on Visual Media Production
December 2022, 97 pages
ISBN: 9781450399395
DOI: 10.1145/3565516
        Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 1 December 2022


        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall Acceptance Rate: 40 of 67 submissions, 60%
