Primitive Action Recognition Based on Semantic Facts

  • Conference paper
Social Robotics (ICSR 2023)

Abstract

To interact with humans, a robot has to know the actions performed by each agent present in the environment, robotic or not. Robots are not omniscient and cannot perceive every action performed, but, as humans do, they can be equipped with the ability to infer what has happened from the perceived effects of these actions on the environment.

In this paper, we present a lightweight and open-source framework to recognise primitive actions and their parameters. Based on a semantic abstraction of changes in the environment, it makes it possible to recognise actions that were not directly perceived. In addition, thanks to its integration into a cognitive robotic architecture implementing perspective-taking and theory of mind, the presented framework is able to estimate the actions recognised by the agents interacting with the robot. These recognition processes are refined on the fly based on the current observations. Tests on real robots demonstrate the framework's usability in interactive contexts.
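The abstract describes recognising primitive actions from the semantic effects they leave on the environment: facts added to or removed from the robot's world model. As a rough illustration of that idea (a minimal sketch only; the `Fact`, `PICK`, and `match` names are hypothetical and do not reflect the paper's actual API), an action template can list the facts its execution adds and removes, and observed fact changes can be unified against it to recover the action and its parameters:

```python
# Hypothetical sketch (not the paper's API): infer a primitive action and its
# parameters from the semantic facts it added/removed, even when the action
# itself was never perceived.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str

# A primitive action template: facts it is expected to add and remove.
# '?X'-style variables stand for the action's parameters.
PICK = {
    "name": "pick",
    "added": [("?agent", "hasInHand", "?obj")],
    "removed": [("?obj", "isOnTopOf", "?support")],
}

def match(template, added, removed):
    """Try to bind the template's variables against observed fact changes."""
    bindings = {}

    def unify(pattern, facts):
        for (s, p, o) in pattern:
            found = False
            for f in facts:
                if p != f.predicate:
                    continue
                trial = dict(bindings)
                ok = True
                for var, val in ((s, f.subject), (o, f.obj)):
                    if var.startswith("?"):
                        if trial.get(var, val) != val:
                            ok = False
                            break
                        trial[var] = val
                    elif var != val:
                        ok = False
                        break
                if ok:
                    bindings.update(trial)
                    found = True
                    break
            if not found:
                return False
        return True

    if unify(template["added"], added) and unify(template["removed"], removed):
        return template["name"], bindings
    return None

# Observed effects: the cube left the table and is now in the human's hand,
# although the pick motion itself was not observed.
added = [Fact("human_0", "hasInHand", "cube_1")]
removed = [Fact("cube_1", "isOnTopOf", "table_1")]

print(match(PICK, added, removed))
# → ('pick', {'?agent': 'human_0', '?obj': 'cube_1', '?support': 'table_1'})
```

The same matching could be run separately on each agent's estimated belief state, which is how a perspective-taking architecture could estimate which actions another agent has itself recognised.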

Notes

  1. ROSbags: https://gitlab.laas.fr/avigne/action_recognition_dataset.

  2. The agent is not perceived because it has not been equipped for this.

  3. Video: https://youtu.be/cwLLEAA_mCY.

  4. https://github.com/vigne-laas/Procedural.

Acknowledgements

This work has been supported by the Artificial Intelligence for Human-Robot Interaction (AI4HRI) project ANR-20-IADJ-0006 and DISCUTER project ANR-21-ASIA-0005.

Author information

Correspondence to Adrien Vigné.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Vigné, A., Sarthou, G., Clodic, A. (2024). Primitive Action Recognition Based on Semantic Facts. In: Ali, A.A., et al. (eds.) Social Robotics. ICSR 2023. Lecture Notes in Computer Science, vol. 14453. Springer, Singapore. https://doi.org/10.1007/978-981-99-8715-3_29

  • DOI: https://doi.org/10.1007/978-981-99-8715-3_29

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8714-6

  • Online ISBN: 978-981-99-8715-3

  • eBook Packages: Computer Science (R0)
