Abstract
We introduce the task of 3D visual grounding in large-scale dynamic scenes based on natural language descriptions and online-captured multi-modal visual data, including 2D images and 3D LiDAR point clouds. We present a novel method, dubbed WildRefer, for this task that fully exploits the rich appearance information in images, the positional and geometric cues in point clouds, and the semantic knowledge of language descriptions. In addition, we propose two novel datasets, STRefer and LifeRefer, which focus on large-scale human-centric daily-life scenarios and are accompanied by abundant 3D object and natural language annotations. Our datasets are significant for research on 3D visual grounding in the wild and have great potential to advance the development of autonomous driving and service robots. Extensive experiments and ablation studies demonstrate that our method achieves state-of-the-art performance on the proposed benchmarks. The code is available at https://github.com/4DVLab/WildRefer.
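The abstract describes fusing image appearance, point-cloud geometry, and language semantics to localize a referred object in 3D. The sketch below is a minimal, illustrative PyTorch module and not the authors' WildRefer implementation; the feature dimensions, the module name `NaiveGroundingHead`, and the simple language-guided cross-attention design are assumptions made only to show one plausible way such a fusion could look.

```python
# Minimal sketch (not the paper's method) of multi-modal fusion for 3D visual
# grounding: project image, point-cloud, and text features into a shared space,
# let visual tokens attend to the description, and score candidate proposals.
import torch
import torch.nn as nn


class NaiveGroundingHead(nn.Module):
    """Fuse per-modality token features and score candidate 3D proposals."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        # Project each modality into a shared embedding space
        # (input dimensions are illustrative, e.g. ResNet / PointNet++ / BERT).
        self.img_proj = nn.Linear(512, d_model)
        self.pts_proj = nn.Linear(128, d_model)
        self.txt_proj = nn.Linear(768, d_model)
        # Language-guided cross-attention over the visual tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Relevance score per visual token (candidate object proposal).
        self.score = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                   nn.Linear(d_model, 1))

    def forward(self, img_feat, pts_feat, txt_feat):
        # img_feat: (B, Ni, 512), pts_feat: (B, Np, 128), txt_feat: (B, Nt, 768)
        visual = torch.cat([self.img_proj(img_feat), self.pts_proj(pts_feat)], dim=1)
        text = self.txt_proj(txt_feat)
        # Visual tokens query the description to pick up referential cues.
        fused, _ = self.cross_attn(query=visual, key=text, value=text)
        return self.score(fused).squeeze(-1)  # (B, Ni + Np) relevance logits


if __name__ == "__main__":
    head = NaiveGroundingHead()
    logits = head(torch.randn(2, 50, 512),
                  torch.randn(2, 256, 128),
                  torch.randn(2, 20, 768))
    print(logits.shape)  # torch.Size([2, 306])
```

In a full system the highest-scoring token would be decoded into a 3D bounding box; this sketch only illustrates the cross-modal scoring step suggested by the abstract.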
This work was supported by NSFC (No. 62206173), Natural Science Foundation of Shanghai (No. 22dz1201900), Shanghai Sailing Program (No. 22YF1428700), MoE Key Laboratory of Intelligent Perception and Human-Machine Collaboration (ShanghaiTech University), Shanghai Frontiers Science Center of Human-centered Artificial Intelligence (ShangHAI).