
Self-distilled Self-supervised Depth Estimation in Monocular Videos

  • Conference paper
Pattern Recognition and Artificial Intelligence (ICPRAI 2022)

Abstract

In this work, we investigate approaches to leverage self-distillation via prediction consistency in self-supervised monocular depth estimation models. Since per-pixel depth predictions are not equally accurate, we propose a mechanism to filter out unreliable predictions. Moreover, we study representative strategies to enforce consistency between predictions. Our results show that choosing proper filtering and consistency enforcement approaches is key to obtaining larger improvements in monocular depth estimation. Our method achieves competitive performance on the KITTI benchmark.
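
The abstract only sketches the mechanism, so the snippet below illustrates one way self-distillation via prediction consistency with per-pixel filtering can be set up. It is a minimal sketch, not the paper's implementation: the reliability criterion (thresholding a photometric reprojection error), the threshold value, and all function and variable names are assumptions made for illustration.

```python
# Minimal sketch (not the paper's exact method): self-distillation by
# enforcing consistency between a teacher's and a student's per-pixel
# depth predictions, keeping only pixels deemed reliable.
# The reliability test (photometric-error threshold) and all names here
# are illustrative assumptions.
import torch

def reliability_mask(photometric_error, threshold=0.15):
    """Mark pixels with low reprojection error as reliable (assumed criterion)."""
    return (photometric_error < threshold).float()

def self_distillation_loss(student_depth, teacher_depth, photometric_error):
    """L1 consistency between student and teacher depth on reliable pixels only."""
    mask = reliability_mask(photometric_error)
    teacher_depth = teacher_depth.detach()  # no gradients flow into the teacher
    diff = torch.abs(student_depth - teacher_depth) * mask
    return diff.sum() / mask.sum().clamp(min=1.0)

# Usage with dummy tensors (batch of 2, single-channel 192x640 depth maps):
student = torch.rand(2, 1, 192, 640, requires_grad=True)
teacher = torch.rand(2, 1, 192, 640)
error = torch.rand(2, 1, 192, 640)
loss = self_distillation_loss(student, teacher, error)
loss.backward()
```

Detaching the teacher prediction keeps the consistency term from back-propagating into the distillation target, which is the usual choice in teacher-student consistency setups.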



Author information


Corresponding author

Correspondence to Helio Pedrini.



Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Mendoza, J., Pedrini, H. (2022). Self-distilled Self-supervised Depth Estimation in Monocular Videos. In: El Yacoubi, M., Granger, E., Yuen, P.C., Pal, U., Vincent, N. (eds) Pattern Recognition and Artificial Intelligence. ICPRAI 2022. Lecture Notes in Computer Science, vol 13363. Springer, Cham. https://doi.org/10.1007/978-3-031-09037-0_35

  • DOI: https://doi.org/10.1007/978-3-031-09037-0_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-09036-3

  • Online ISBN: 978-3-031-09037-0

  • eBook Packages: Computer Science, Computer Science (R0)
