Abstract
Estimating \(360^{\circ }\) depth has attracted considerable attention due to the rapid development of emerging \(360^{\circ }\) cameras. However, most research focuses on handling the distortion of \(360^{\circ }\) images while ignoring their geometric structure, leading to poor performance. In this paper, we apply indoor structural regularities to self-supervised \(360^{\circ }\) depth estimation. Specifically, we carefully design two geometric constraints for efficient model optimization: a dominant-direction normal constraint and a planar-consistency depth constraint. The dominant-direction normal constraint aligns the surface normals of indoor \(360^{\circ }\) images with the directions of the vanishing points. The planar-consistency depth constraint fits the estimated depth of each pixel to its underlying 3D plane. Incorporating these two geometric constraints thus facilitates the generation of accurate depth maps for \(360^{\circ }\) images. Extensive experiments show that our method improves \(\delta _1\) by an average of 4.82% over state-of-the-art methods on the Matterport3D and Stanford2D3D datasets within 3D60.
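For concreteness, the sketch below shows one way the two constraints could be realized as self-supervised losses in PyTorch. The tensor shapes, the least-squares plane fit, and all function names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F


def dominant_direction_normal_loss(normals, dominant_dirs):
    # Align per-pixel surface normals with the nearest dominant direction.
    # normals:       (B, 3, H, W) surface normals derived from predicted depth.
    # dominant_dirs: (K, 3) unit vectors, e.g. the Manhattan-world directions
    #                recovered from vanishing points (hypothetical input).
    n = F.normalize(normals, dim=1)                  # (B, 3, H, W)
    d = F.normalize(dominant_dirs, dim=1)            # (K, 3)
    # Cosine similarity between every pixel normal and every direction.
    cos = torch.einsum('bchw,kc->bkhw', n, d).abs()  # (B, K, H, W)
    # Penalize the deviation from the best-matching direction per pixel.
    return (1.0 - cos.max(dim=1).values).mean()


def planar_consistency_depth_loss(points, plane_mask):
    # Fit a plane to the 3D points of one detected planar region and
    # penalize each point's distance to that plane.
    # points:     (N, 3) back-projected 3D points of the image.
    # plane_mask: (N,) boolean mask selecting one planar region.
    p = points[plane_mask]                           # (M, 3)
    centroid = p.mean(dim=0, keepdim=True)
    # Plane normal = singular vector of the smallest singular value.
    _, _, vh = torch.linalg.svd(p - centroid)
    normal = vh[-1]                                  # (3,), unit length
    offset = (centroid.squeeze(0) * normal).sum()
    # Point-to-plane distances (normal is unit, so no division needed).
    dist = ((p * normal).sum(dim=1) - offset).abs()
    return dist.mean()

In a full pipeline these terms would be weighted and combined with a view-synthesis photometric loss; the weights and the plane-segmentation step are omitted here as they are not specified by the abstract.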
This work was supported in part by the National Natural Science Foundation of China (Grants 61871270 and 62171134) and in part by the Shenzhen Natural Science Foundation (Grants JCYJ20200109110410133 and 20200812110350001).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Kong, W., Zhang, Q., Yang, Y., Zhao, T., Wu, W., Wang, X. (2022). Self-supervised Indoor 360-Degree Depth Estimation via Structural Regularization. In: Khanna, S., Cao, J., Bai, Q., Xu, G. (eds) PRICAI 2022: Trends in Artificial Intelligence. PRICAI 2022. Lecture Notes in Computer Science, vol 13631. Springer, Cham. https://doi.org/10.1007/978-3-031-20868-3_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20867-6
Online ISBN: 978-3-031-20868-3