
EMTNet: efficient mobile transformer network for real-time monocular depth estimation

  • Short Paper
  • Published in Pattern Analysis and Applications

Abstract

Estimating depth from a single image is a formidable challenge due to the inherently ill-posed and ambiguous nature of recovering depth information from a 3D scene. Prior approaches to monocular depth estimation have relied mainly on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs) as the primary feature extractors. However, striking a balance between speed and accuracy in real-time settings has proven difficult with these methods. In this study, we propose a new model, EMTNet, which extracts feature information from images at both local and global scales by combining CNN and ViT. To reduce the number of parameters, EMTNet introduces the mobile transformer block (MTB), which reuses parameters from self-attention. High-resolution depth maps are generated by fusing multi-scale features in the decoder. Comprehensive validation on the NYU Depth V2 and KITTI datasets demonstrates that EMTNet outperforms previous real-time monocular depth estimation models based on CNNs and hybrid architectures. In addition, we conducted generalization tests and ablation experiments to verify our design choices. The depth maps produced by EMTNet exhibit intricate details, and the model runs at a real-time frame rate of 32 FPS, achieving a balance between real-time performance and accuracy.
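The idea described in the abstract, combining a convolutional branch for local features with a self-attention branch for global context and fusing multi-scale features in a decoder, can be illustrated with a minimal PyTorch-style sketch. The class names, channel sizes, and layer choices below (LocalGlobalBlock, FusionDecoder) are illustrative assumptions only, not the authors' MTB implementation.

```python
# Hypothetical sketch of the local/global feature fusion described in the abstract.
# All names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalGlobalBlock(nn.Module):
    """Combines a convolutional (local) branch with a self-attention (global) branch."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise + pointwise convolution, as in mobile CNN designs.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        # Global branch: multi-head self-attention over flattened spatial tokens.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))      # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_feat                         # residual fusion


class FusionDecoder(nn.Module):
    """Fuses multi-scale encoder features into a single-channel depth map."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in channels])
        self.head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, feats):
        # feats: list of feature maps ordered coarse to fine resolution.
        target = feats[-1].shape[-2:]
        fused = sum(
            F.interpolate(r(f), size=target, mode="bilinear", align_corners=False)
            for r, f in zip(self.reduce, feats)
        )
        return torch.sigmoid(self.head(fused))                 # normalized depth map


if __name__ == "__main__":
    block = LocalGlobalBlock(64)
    x = torch.randn(1, 64, 32, 32)
    print(block(x).shape)  # torch.Size([1, 64, 32, 32])
```

The residual sum of the convolutional and attention branches is one simple way to blend local and global information; the paper's MTB additionally reuses self-attention parameters to cut the parameter count, which this sketch does not attempt to reproduce.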



Author information


Correspondence to Long Yan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yan, L., Yu, F. & Dong, C. EMTNet: efficient mobile transformer network for real-time monocular depth estimation. Pattern Anal Applic 26, 1833–1846 (2023). https://doi.org/10.1007/s10044-023-01205-4

