Abstract
In this article, we explore the integration of multimodal data into monocular depth estimation by fusing RGB images with sparse radar data. Existing fusion methods ignore the channel-wise and spatial correlations between the two modalities and therefore fail to capture their global information relationships. We propose a feature fusion module based on a dual attention mechanism (DAF). By modeling the dynamic, non-linear relationships between the two modalities along both the channel and spatial dimensions, DAF improves the model's ability to represent global information, adaptively recalibrates the response to each feature, and makes fuller use of the radar data. By weighting features, DAF also suppresses noise in the radar data without the loss of fine detail that filtering operations cause, alleviating the problem of heavy radar noise. Finally, because of complex weather conditions and the limitations of the model itself, it is difficult to learn effective feature representations in adverse environments. We therefore introduce a batch loss function that makes the model focus on feature extraction in such environments, yielding more accurate feature representations, reducing model error, and accelerating convergence. Experiments are conducted on the recently released nuScenes dataset, which provides recordings from the entire sensor suite of an autonomous vehicle, and show that our method outperforms other fusion methods.
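The abstract does not spell out the internals of DAF, so the following is a minimal PyTorch sketch of one way a dual (channel plus spatial) attention fusion of RGB and radar features could be built, assuming SE-style channel gating and CBAM-style spatial gating. The class name DualAttentionFusion, the reduction ratio, and the 7x7 spatial kernel are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    """Hypothetical dual attention fusion block: fuses an RGB feature map
    with a sparse radar feature map via channel and spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        fused = 2 * channels
        # Channel attention (SE-style): global pooling + bottleneck MLP.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention (CBAM-style): 7x7 conv over pooled channel maps.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # 1x1 conv to project the fused features back to `channels`.
        self.project = nn.Conv2d(fused, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb_feat, radar_feat], dim=1)       # (B, 2C, H, W)
        x = x * self.channel_gate(x)                       # recalibrate channels
        avg_map = x.mean(dim=1, keepdim=True)              # (B, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)            # (B, 1, H, W)
        x = x * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return self.project(x)

# Usage: fuse 64-channel RGB and radar feature maps of matching resolution.
if __name__ == "__main__":
    daf = DualAttentionFusion(channels=64)
    rgb = torch.randn(2, 64, 56, 112)
    radar = torch.randn(2, 64, 56, 112)
    print(daf(rgb, radar).shape)  # torch.Size([2, 64, 56, 112])

The channel gate globally recalibrates each channel of the concatenated features while the spatial gate reweights individual locations, which is one plausible realization of the adaptive per-feature recalibration described above.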
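The batch loss is likewise not defined in the abstract. One plausible reading is that per-sample losses within a batch are reweighted so that hard samples (for example, frames captured in adverse weather) contribute more to the gradient. The sketch below implements that reading purely as a stated assumption; batch_weighted_loss and temperature are hypothetical names, not the paper's formulation.

import torch

def batch_weighted_loss(per_pixel_err: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Hypothetical batch loss: emphasize hard samples within a batch.
    per_pixel_err: (B, H, W) non-negative depth errors, e.g. |pred - gt|."""
    per_sample = per_pixel_err.flatten(start_dim=1).mean(dim=1)        # (B,)
    # Softmax over per-sample errors: harder samples get larger weights.
    # detach() keeps the weights out of the backward pass.
    weights = torch.softmax(per_sample.detach() / temperature, dim=0)  # (B,)
    return (weights * per_sample).sum()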
Funding
This work is supported by the Key Research and Development Program of Hunan Province (Nos. 2019SK2161 and 2016SK2017).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Long, J., Huang, J., Wang, S. (2022). Radar Fusion Monocular Depth Estimation Based on Dual Attention. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2022. Lecture Notes in Computer Science, vol 13338. Springer, Cham. https://doi.org/10.1007/978-3-031-06794-5_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06793-8
Online ISBN: 978-3-031-06794-5
eBook Packages: Computer Science, Computer Science (R0)