Abstract
Depth estimation in texture-less regions of light fields is an important research direction, yet few existing methods are dedicated to this issue. We find that context information is crucial for depth estimation in texture-less regions. In this paper, we propose a simple yet effective method called ContextNet for texture-less light field depth estimation by learning context information. Specifically, we enlarge the receptive field of feature extraction by using dilated convolutions and by increasing the training patch size. Moreover, we design the Augment SPP (AugSPP) module to aggregate multi-scale and multi-level features. Extensive experiments demonstrate the effectiveness of our method, which significantly improves depth estimation results in texture-less regions. Our method outperforms current state-of-the-art methods (e.g., LFattNet, DistgDisp, OACC-Net, and SubFocal) on the UrbanLF-Syn dataset in terms of MSE ×100, BadPix 0.07, BadPix 0.03, and BadPix 0.01. It also ranks third overall in the LFNAT Light Field Depth Estimation Challenge at the CVPR 2023 Workshop without any post-processing steps. The code and model are available at https://github.com/chaowentao/ContextNet.
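To make the two ingredients named in the abstract concrete, the following is a minimal PyTorch sketch of (a) a stride-1 dilated-convolution feature extractor that enlarges the receptive field without shrinking the feature map, and (b) an SPP-style module that aggregates multi-scale pooled features together with multi-level (shallow and deep) features. All module names, channel widths, dilation rates, and pooling sizes here are illustrative assumptions, not the authors' released ContextNet implementation (see the linked repository for that).

```python
# Hedged sketch only: hyperparameters and structure are assumptions,
# not the official ContextNet/AugSPP code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedFeatureExtractor(nn.Module):
    """Stacked 3x3 convolutions with growing dilation rates.

    A 3x3 conv with dilation d adds 2*d pixels to the receptive field,
    so rates (1, 2, 4, 8) cover far more context than plain convolutions
    of the same depth and parameter count, while stride 1 with
    padding = dilation keeps the spatial resolution unchanged.
    """

    def __init__(self, in_ch: int = 3, ch: int = 32):
        super().__init__()
        layers, c = [], in_ch
        for d in (1, 2, 4, 8):
            layers += [nn.Conv2d(c, ch, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
            c = ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class AugSPPSketch(nn.Module):
    """SPP-style aggregation over multiple scales and multiple levels.

    Pools the deep feature map at several grid sizes (multi-scale),
    upsamples each pooled map back to full resolution, and concatenates
    them with a shallower feature map (multi-level) before a 1x1
    fusion convolution.
    """

    def __init__(self, ch: int = 32, pool_sizes=(1, 2, 4, 8)):
        super().__init__()
        self.pool_sizes = pool_sizes
        n_in = ch * (len(pool_sizes) + 2)  # pooled branches + 2 levels
        self.fuse = nn.Conv2d(n_in, ch, 1)

    def forward(self, shallow, deep):
        h, w = deep.shape[-2:]
        branches = [shallow, deep]
        for s in self.pool_sizes:
            p = F.adaptive_avg_pool2d(deep, s)  # multi-scale pooling
            branches.append(F.interpolate(p, size=(h, w), mode="bilinear",
                                          align_corners=False))
        return self.fuse(torch.cat(branches, dim=1))  # multi-level fusion
```

Because every convolution above uses stride 1 with padding equal to its dilation rate, shallow and deep feature maps share the same spatial size, which is what lets the sketch concatenate them directly; this also illustrates why larger training patches matter, since a patch must be big enough to fill the enlarged receptive field with real context.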
Notes
1. http://www.lfchallenge.com/dp_lambertian_plane_result/. On this benchmark, our method is listed as SF-Net.
References
Chao, W., Duan, F., Wang, X., Wang, Y., Wang, G.: OccCasNet: occlusion-aware cascade cost volume for light field depth estimation. arXiv preprint arXiv:2305.17710 (2023)
Chao, W., Wang, X., Wang, Y., Wang, G., Duan, F.: Learning sub-pixel disparity distribution for light field depth estimation. TCI Early Access, 1–12 (2023)
Chen, J., Zhang, S., Lin, Y.: Attention-based multi-level fusion network for light field depth estimation. In: AAAI, pp. 1009–1017 (2021)
Chen, J., Chau, L.: Light field compressed sensing over a disparity-aware dictionary. TCSVT 27(4), 855–865 (2017)
Chen, Y., Zhang, S., Chang, S., Lin, Y.: Light field reconstruction using efficient pseudo 4D epipolar-aware structure. TCI 8, 397–410 (2022)
Cheng, Z., Liu, Y., Xiong, Z.: Spatial-angular versatile convolution for light field reconstruction. TCI 8, 1131–1144 (2022)
Cheng, Z., Xiong, Z., Chen, C., Liu, D., Zha, Z.J.: Light field super-resolution with zero-shot learning. In: CVPR, pp. 10010–10019 (2021)
Guo, C., Jin, J., Hou, J., Chen, J.: Accurate light field depth estimation via an occlusion-aware network. In: ICME, pp. 1–6 (2020)
Han, K., Xiang, W., Wang, E., Huang, T.: A novel occlusion-aware vote cost for light field depth estimation. TPAMI 44(11), 8022–8035 (2022)
He, L., Wang, G., Hu, Z.: Learning depth from single images with deep neural network embedding focal length. TIP 27(9), 4676–4689 (2018)
Heber, S., Pock, T.: Convolutional networks for shape from light field. In: CVPR, pp. 3746–3754 (2016)
Honauer, K., Johannsen, O., Kondermann, D., Goldluecke, B.: A dataset and evaluation methodology for depth estimation on 4D light fields. In: Lai, S.-H., Lepetit, V., Nishino, K., Sato, Y. (eds.) ACCV 2016. LNCS, vol. 10113, pp. 19–34. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54187-7_2
Jeon, H.G., et al.: Accurate depth map estimation from a lenslet light field camera. In: CVPR, pp. 1547–1555 (2015)
Jin, J., Hou, J., Chen, J., Kwong, S.: Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization. In: CVPR, pp. 2260–2269 (2020)
Jin, J., Hou, J., Chen, J., Zeng, H., Kwong, S., Yu, J.: Deep coarse-to-fine dense light field reconstruction with flexible sampling and geometry-aware fusion. TPAMI 44, 1819–1836 (2020)
He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. TPAMI 37(9), 1904–1916 (2015)
Kim, C., Zimmer, H., Pritch, Y., Sorkine-Hornung, A., Gross, M.H.: Scene reconstruction from high spatio-angular resolution light fields. TOG 32(4), 73:1–73:12 (2013)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Meng, N., So, H.K.H., Sun, X., Lam, E.Y.: High-dimensional dense residual convolutional neural network for light field reconstruction. TPAMI 43(3), 873–886 (2019)
Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P.: Light field photography with a hand-held plenoptic camera. Tech. rep., Stanford University (2005)
Peng, J., Xiong, Z., Wang, Y., Zhang, Y., Liu, D.: Zero-shot depth estimation from light field using a convolutional neural network. TCI 6, 682–696 (2020)
Sheng, H., Cong, R., Yang, D., Chen, R., Wang, S., Cui, Z.: UrbanLF: a comprehensive light field dataset for semantic segmentation of urban scenes. TCSVT 32(11), 7880–7893 (2022)
Shin, C., Jeon, H.G., Yoon, Y., Kweon, I.S., Kim, S.J.: EPINet: a fully-convolutional neural network using epipolar geometry for depth from light field images. In: CVPR, pp. 4748–4757 (2018)
Tao, M.W., Hadap, S., Malik, J., Ramamoorthi, R.: Depth from combining defocus and correspondence using light-field cameras. In: ICCV, pp. 673–680 (2013)
Tsai, Y.J., Liu, Y.L., Ouhyoung, M., Chuang, Y.Y.: Attention-based view selection networks for light-field disparity estimation. In: AAAI, pp. 12095–12103 (2020)
Van Duong, V., Huu, T.N., Yim, J., Jeon, B.: Light field image super-resolution network via joint spatial-angular and epipolar information. TCI 9, 350–366 (2023)
Wang, Y., Wang, L., Liang, Z., Yang, J., An, W., Guo, Y.: Occlusion-aware cost constructor for light field depth estimation. In: CVPR, pp. 19809–19818 (2022)
Wang, Y., Wang, L., Liang, Z., Yang, J., Timofte, R., Guo, Y.: NTIRE 2023 challenge on light field image super-resolution: dataset, methods and results. arXiv preprint arXiv:2304.10415 (2023)
Wang, Y., et al.: Disentangling light fields for super-resolution and disparity estimation. TPAMI 45, 425–443 (2022)
Wang, Y., Yang, J., Guo, Y., Xiao, C., An, W.: Selective light field refocusing for camera arrays using bokeh rendering and superresolution. SPL 26(1), 204–208 (2018)
Wanner, S., Goldluecke, B.: Variational light field analysis for disparity estimation and super-resolution. TPAMI 36(3), 606–619 (2014)
Williem, W., Park, I.K.: Robust light field depth estimation for noisy scene with occlusion. In: CVPR, pp. 4396–4404 (2016)
Wu, G., Liu, Y., Fang, L., Dai, Q., Chai, T.: Light field reconstruction using convolutional network on EPI and extended applications. TPAMI 41(7), 1681–1694 (2018)
Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122 (2015)
Yu, J.: A light-field journey to virtual reality. TMM 24(2), 104–112 (2017)
Zhang, S., Lin, Y., Sheng, H.: Residual networks for light field image super-resolution. In: CVPR, pp. 11046–11055 (2019)
Zhang, S., Sheng, H., Li, C., Zhang, J., Xiong, Z.: Robust depth estimation for light field via spinning parallelogram operator. CVIU 145, 148–159 (2016)
Zhang, Y., et al.: Light-field depth estimation via epipolar plane image analysis and locally linear embedding. TCSVT 27(4), 739–747 (2016)
Zhang, Y., Dai, W., Xu, M., Zou, J., Zhang, X., Xiong, H.: Depth estimation from light field using graph-based structure-aware analysis. TCSVT 30(11), 4269–4283 (2019)
Zhu, H., Wang, Q., Yu, J.: Occlusion-model guided anti-occlusion depth estimation in light field. J-STSP 11(7), 965–978 (2017)
Acknowledgement
This work was supported by the National Key Research and Development Project under Grant 2018AAA0100802.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chao, W., Wang, X., Kan, Y., Duan, F. (2024). ContextNet: Learning Context Information for Texture-Less Light Field Depth Estimation. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14430. Springer, Singapore. https://doi.org/10.1007/978-981-99-8537-1_2
DOI: https://doi.org/10.1007/978-981-99-8537-1_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8536-4
Online ISBN: 978-981-99-8537-1