Light Field Reconstruction Using Dynamically Generated Filters

  • Conference paper
MultiMedia Modeling (MMM 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11961)

Abstract

Densely-sampled light fields have already shown unique advantages in applications such as depth estimation, refocusing, and 3D presentation, but they are difficult and expensive to acquire. Commodity portable light field cameras, such as Lytro and Raytrix, are easy to carry and operate. However, due to their design, there is a trade-off between spatial and angular resolution: both cannot be sampled densely at the same time. In this paper, we present a novel learning-based light field reconstruction approach that increases the angular resolution of a sparsely-sampled light field image. Our approach treats reconstruction as a filtering operation on the sub-aperture images of the input light field and uses a deep neural network to estimate the filtering kernels for each sub-aperture image. Our network adopts a U-Net structure to extract feature maps from the input sub-aperture images and the angular coordinates of the novel view; a filter-generating component then estimates the kernels. We compare our method with existing light field reconstruction methods, both with and without depth information. Experiments show that our method produces much better results, both visually and quantitatively.
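
The filtering formulation in the abstract lends itself to a short illustration. The sketch below is a minimal PyTorch rendering of the general dynamic-filtering idea, not the authors' implementation: a small convolutional predictor (standing in for the paper's U-Net and filter-generating component) maps the stacked sub-aperture images and the novel view's angular coordinates to a per-pixel kernel for each input view, and the novel view is formed by applying those kernels locally. The kernel size, network depth, and all names are illustrative assumptions.

```python
# Minimal sketch of dynamic filtering for view synthesis. Assumed, not the
# paper's exact design: the plain conv predictor, kernel size K, and shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 5  # assumed local kernel size


class KernelPredictor(nn.Module):
    def __init__(self, in_views: int, k: int = K):
        super().__init__()
        self.k = k
        # Input channels: V sub-aperture images + 2 broadcast (u, v) coords.
        self.net = nn.Sequential(
            nn.Conv2d(in_views + 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, in_views * k * k, 3, padding=1),
        )

    def forward(self, views, uv):
        # views: (B, V, H, W) grayscale sub-aperture images
        # uv:    (B, 2) angular coordinates of the view to synthesize
        B, V, H, W = views.shape
        coords = uv.view(B, 2, 1, 1).expand(B, 2, H, W)
        kernels = self.net(torch.cat([views, coords], dim=1))
        kernels = kernels.view(B, V, self.k * self.k, H, W)
        # Softmax over the k*k taps keeps each predicted kernel normalized.
        return F.softmax(kernels, dim=2)


def apply_dynamic_filters(views, kernels, k: int = K):
    """Filter each sub-aperture image with its per-pixel kernel, then
    average the filtered views to form the novel view."""
    B, V, H, W = views.shape
    # unfold gathers the k x k neighborhood around every pixel.
    patches = F.unfold(views.reshape(B * V, 1, H, W), k, padding=k // 2)
    patches = patches.view(B, V, k * k, H, W)
    novel = (patches * kernels).sum(dim=2)   # (B, V, H, W)
    return novel.mean(dim=1, keepdim=True)   # (B, 1, H, W)


# Toy usage: synthesize a view at angular position (0.5, 0.5) from 4 views.
net = KernelPredictor(in_views=4)
views = torch.rand(1, 4, 64, 64)
out = apply_dynamic_filters(views, net(views, torch.tensor([[0.5, 0.5]])))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

Predicting a kernel per output pixel, rather than applying one fixed filter, is what lets a network of this kind adapt to scene-dependent disparities between sub-aperture images.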

Notes

  1. We use sparsely-sampled LF to refer to a light field that is sampled sparsely in the angular domain.

Acknowledgements

This work was supported by the National Key R&D Program of China (2018YFB0804203), the National Natural Science Foundation of China (U153124, 61702479, 61771458), the Science and Technology Service Network Initiative of the Chinese Academy of Sciences (KFJ-STS-ZDTP-070), and the Beijing Municipal Natural Science Foundation-Beijing Education Committee cooperation project (KZ201810005002).

Author information

Corresponding author

Correspondence to Qiang Zhao.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Jing, X., Ma, Y., Zhao, Q., Lyu, K., Dai, F. (2020). Light Field Reconstruction Using Dynamically Generated Filters. In: Ro, Y., et al. (eds.) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol. 11961. Springer, Cham. https://doi.org/10.1007/978-3-030-37731-1_1

  • DOI: https://doi.org/10.1007/978-3-030-37731-1_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37730-4

  • Online ISBN: 978-3-030-37731-1

  • eBook Packages: Computer Science, Computer Science (R0)
