Abstract
Object navigation tasks require an agent to find a target object in an unknown environment based on its observations. Researchers have employed various techniques, such as extracting high-level semantic information and building memory networks, to enhance the agent's perception and understanding of the environment. However, these methods neglect the correlation between the representation of the current scene and the target description, as well as the relationship between perception and actions. In this paper, we propose a model that uses semantic features of the visual observation as the navigation input, represented in a modality similar to that of the target embedding. On this basis, we fuse the visual features and spatial masks with an encoder-decoder Transformer structure to reflect the association between perception and actions. Furthermore, the memory module integrates the representations of explored scenes with the target information to guide the selection of upcoming navigation directions more directly. Our method enables the agent to perceive the position of the target more quickly and execute accurate actions to approach it, and it outperforms state-of-the-art (SOTA) models in the AI2Thor environment with a higher navigation success rate and better learning efficiency.
This research was supported in part by the National Natural Science Foundation of China (Grant No. 62306329) and the Natural Science Foundation of Hunan Province (Grant No. 2023JJ40676).
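To make the fusion described in the abstract concrete, below is a minimal PyTorch sketch of an encoder-decoder Transformer that reads scene tokens fused with the target embedding and queries them with learned action queries. All names and dimensions here (ACTSketch, obj_dim=517, a six-action query set, etc.) are illustrative assumptions, not the authors' implementation; the memory module is omitted for brevity.

```python
import torch
import torch.nn as nn

class ACTSketch(nn.Module):
    """Illustrative sketch of target-related, action-associated fusion.

    All module names, dimensions, and the action-query design are
    assumptions for illustration, not the paper's actual code.
    """

    def __init__(self, obj_dim=517, tgt_dim=300, d_model=256, n_actions=6):
        super().__init__()
        # Project per-object semantic features (e.g. detector appearance
        # feature concatenated with box coordinates and confidence) and
        # the target word embedding into a shared feature space.
        self.visual_proj = nn.Linear(obj_dim, d_model)
        self.target_proj = nn.Linear(tgt_dim, d_model)
        # Encoder reads the fused scene/target tokens; decoder queries
        # them with action queries to associate perception with actions.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.action_queries = nn.Parameter(torch.randn(n_actions, d_model))
        self.policy_head = nn.Linear(d_model, 1)  # one logit per action query

    def forward(self, object_feats, target_emb):
        # object_feats: (B, N, obj_dim); target_emb: (B, tgt_dim)
        tokens = self.visual_proj(object_feats)             # (B, N, d)
        target = self.target_proj(target_emb).unsqueeze(1)  # (B, 1, d)
        # Prepend the target token so the encoded scene representation
        # is already target-related before action decoding.
        src = torch.cat([target, tokens], dim=1)            # (B, N+1, d)
        queries = self.action_queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        decoded = self.transformer(src, queries)            # (B, n_actions, d)
        return self.policy_head(decoded).squeeze(-1)        # (B, n_actions) logits

# Toy usage: batch of 1, 8 detected objects, GloVe-sized target embedding.
model = ACTSketch()
logits = model(torch.randn(1, 8, 517), torch.randn(1, 300))  # -> (1, 6)
```

The design choice of one decoder query per action is one plausible reading of "associating perception with actions"; the actual paper may instead decode a single state vector and feed it to a recurrent policy.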
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, Y., Hu, Y., Wu, W., Liu, T., Peng, Y. (2024). ACT: Action-assoCiated and Target-Related Representations for Object Navigation. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14554. Springer, Cham. https://doi.org/10.1007/978-3-031-53305-1_10
DOI: https://doi.org/10.1007/978-3-031-53305-1_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-53304-4
Online ISBN: 978-3-031-53305-1
eBook Packages: Computer Science, Computer Science (R0)