Abstract
To interact with humans, a robot has to know the actions performed by each agent present in the environment, robotic or not. Robots are not omniscient and cannot perceive every action performed but, as humans do, we can equip the robot with the ability to infer what has happened from the perceived effects of these actions on the environment.
In this paper, we present a lightweight and open-source framework to recognise primitive actions and their parameters. Based on a semantic abstraction of changes in the environment, it makes it possible to recognise unperceived actions. In addition, thanks to its integration into a cognitive robotic architecture implementing perspective-taking and theory of mind, the presented framework is able to estimate the actions recognised by the agent interacting with the robot. These recognition processes are refined on the fly based on the current observations. Tests on real robots demonstrate the framework's usability in interactive contexts.
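The core idea of recognising an action from the perceived effects it leaves on the environment can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the fact triples, predicate names, and the rule table are all hypothetical, standing in for the semantic abstraction the framework builds over world changes.

```python
# Illustrative sketch: inferring an unperceived primitive action from
# changes in a set of semantic facts (subject, predicate, object).
# All predicate names and rules below are hypothetical examples.

def diff_facts(before, after):
    """Return the facts that disappeared and the facts that appeared."""
    return before - after, after - before

# Hypothetical rules: (predicate removed, predicate added) -> action name.
RULES = {
    ("isOnTopOf", "isInHand"): "pick",
    ("isInHand", "isOnTopOf"): "place",
}

def recognise(before, after):
    """Match fact changes against the rules; return (action, object, target)."""
    removed, added = diff_facts(before, after)
    for (subj_r, pred_r, _) in removed:
        for (subj_a, pred_a, obj_a) in added:
            # The same subject losing one relation and gaining another
            # is interpreted as the effect of a primitive action.
            if subj_r == subj_a and (pred_r, pred_a) in RULES:
                return RULES[(pred_r, pred_a)], subj_r, obj_a
    return None

# Even if the pick itself was never observed, its effects reveal it:
before = {("cube", "isOnTopOf", "table")}
after = {("cube", "isInHand", "humanHand")}
print(recognise(before, after))  # -> ('pick', 'cube', 'humanHand')
```

In the actual framework the facts come from the robot's semantic memory and geometric assessment of the world; the sketch only shows why a diff over such facts suffices to name an action that was never directly perceived.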
Notes
- 2. The agent is not perceived because it has not been equipped to do this.
- 3. Video: https://youtu.be/cwLLEAA_mCY.
Acknowledgements
This work was supported by the Artificial Intelligence for Human-Robot Interaction (AI4HRI) project (ANR-20-IADJ-0006) and the DISCUTER project (ANR-21-ASIA-0005).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Vigné, A., Sarthou, G., Clodic, A. (2024). Primitive Action Recognition Based on Semantic Facts. In: Ali, A.A., et al. (eds.) Social Robotics. ICSR 2023. Lecture Notes in Computer Science, vol. 14453. Springer, Singapore. https://doi.org/10.1007/978-981-99-8715-3_29
DOI: https://doi.org/10.1007/978-981-99-8715-3_29
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8714-6
Online ISBN: 978-981-99-8715-3
eBook Packages: Computer Science, Computer Science (R0)