DOI: 10.1145/3462648.3462654
research-article

Moving Target Tracking with a Mobile Robot based on Modified Social Force Model

Published: 22 June 2021

ABSTRACT

This paper proposes a target tracking approach for mobile robots to address the problem of human-robot coexistence and collaboration. An improved social force model (SFM) is applied to enhance the robot's tracking performance in crowded environments. When the robot approaches pedestrians or obstacles, the tracking strategy is adaptively adjusted to avoid collisions. Inverse reinforcement learning (IRL) is used to learn the parameters of the improved SFM, with training data collected in real-world scenes. An effective criterion is designed to evaluate tracking performance, fully considering the relationship between the robot and its surrounding environment. Experimental results demonstrate the effectiveness of the proposed target tracking method.
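For orientation, the classic social force model that the paper's improved SFM builds on (Helbing and Molnar) steers an agent with an attractive force toward its goal plus exponentially decaying repulsive forces from nearby pedestrians and obstacles. The sketch below illustrates that baseline model only, not the authors' modified version; the parameter values (desired speed `v_des`, relaxation time `tau`, repulsion strength `A`, and range `B`) are illustrative placeholders for the quantities the paper learns via IRL.

```python
import math

def social_force(pos, vel, goal, peds, v_des=1.2, tau=0.5, A=2.0, B=0.3):
    """One step of the basic social force model.

    pos, vel : (x, y) robot position and velocity
    goal     : (x, y) position of the tracked target
    peds     : list of (x, y) positions of nearby pedestrians/obstacles
    Returns the net force (fx, fy): an attractive term relaxing the
    velocity toward the goal, plus repulsive terms pushing away from
    each pedestrian.
    """
    # Attractive term: relax velocity toward desired speed along the goal direction.
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy)
    ex, ey = (gx / d, gy / d) if d > 1e-9 else (0.0, 0.0)
    fx = (v_des * ex - vel[0]) / tau
    fy = (v_des * ey - vel[1]) / tau

    # Repulsive terms: magnitude decays exponentially with distance.
    for px, py in peds:
        dx, dy = pos[0] - px, pos[1] - py
        r = math.hypot(dx, dy)
        if r > 1e-9:
            w = A * math.exp(-r / B)
            fx += w * dx / r
            fy += w * dy / r
    return fx, fy
```

For example, a stationary robot at the origin tracking a target at (5, 0) with a pedestrian at (1, 0.2) receives a net force pointing toward the target but deflected slightly away from the pedestrian. In the paper, such forces drive the robot's tracking motion, with the model parameters fitted to real pedestrian data rather than hand-tuned.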

  • Published in

    RobCE '21: Proceedings of the 2021 International Conference on Robotics and Control Engineering
    April 2021, 97 pages
    ISBN: 978-1-4503-8947-1
    DOI: 10.1145/3462648
    Copyright © 2021 ACM


    Publisher: Association for Computing Machinery, New York, NY, United States

    Qualifiers

    • research-article
    • Research
    • Refereed limited
