Abstract
The ability to build robotic agents that can perform everyday tasks heavily depends on understanding how humans perform them. To achieve a close-to-human understanding of a task and generate a formal representation of it, it is important to jointly reason about the human actions and the objects that are being acted on. We present a robotic perception framework for perceiving actions performed by a human in a household environment that can be used to answer questions such as “which object did the human act on?” or “which actions did the human perform?”. To do this we extend the RoboSherlock framework with the capability of detecting humans and objects at the same time, while simultaneously reasoning about the possible actions that are being performed.
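To give a flavour of the kind of query the abstract describes, the sketch below shows one simple way a perception system could answer “which object did the human act on?” from per-frame detections. This is a hypothetical illustration, not the RoboSherlock API: the function names (`acted_on`, `touch_radius`) and the proximity heuristic (the acted-on object is the one most often within reach of the tracked hand) are assumptions made for this example only.

```python
# Hypothetical sketch: infer the acted-on object by hand-object proximity.
# Each frame pairs a tracked hand position with a list of (name, position)
# object detections; positions are (x, y, z) in metres.
import math

def acted_on(frames, touch_radius=0.05):
    """Return the object name most often within touch_radius of the hand,
    or None if the hand never comes close to any object."""
    counts = {}
    for hand_pos, objects in frames:
        # Nearest detected object in this frame.
        name, pos = min(objects, key=lambda o: math.dist(hand_pos, o[1]))
        if math.dist(hand_pos, pos) <= touch_radius:
            counts[name] = counts.get(name, 0) + 1
    return max(counts, key=counts.get) if counts else None
```

For example, a hand hovering 1 cm from a mug across several frames while a plate sits half a metre away yields `"mug"`. A real system would of course reason over richer cues (grasp detection, object state changes, task knowledge) rather than distance alone.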






References
Yazdani F, Brieber B, Beetz M (2014) Cognition-enabled robot control for mixed human-robot rescue teams. In: Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13)
Beetz M, Bartels G, Albu-Schäffer A, Bálint-Benczédi F, Belder R, Beßler D, Haddadin S, Maldonado A, Mansfeld N, Wiedemeyer T, Weitschat R, Worch JH (2015) Robotic agents capable of natural and safe physical interaction with human co-workers. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg, Germany
Beetz M, Jain D, Mosenlechner L, Tenorth M, Kunze L, Blodow N, Pangercic D (2012) Cognition-enabled autonomous robot control for the realization of home chore task intelligence. Proc IEEE 100(8):2454–2471
Beetz M, Tenorth M, Winkler J (2015) Open-EASE—a knowledge processing service for robots and robotics/ai researchers. In: IEEE International Conference on Robotics and Automation (ICRA). Seattle, Washington
Beetz M, Balint-Benczedi F, Blodow N, Nyga D, Wiedemeyer T, Marton ZC (2015) RoboSherlock: unstructured information processing for robot perception. In: IEEE International Conference on Robotics and Automation (ICRA). Seattle, Washington
Ferrucci D, Lally A (2004) UIMA: an architectural approach to unstructured information processing in the corporate research environment. In: Natural Language Engineering, vol. 10, pp. 3–4. Cambridge University Press
Ferrucci D, Brown E, Chu-Carroll J, Fan J, Gondek D, Kalyanpur AA, Lally A, Murdock JW, Nyberg E, Prager J, Schlaefer N, Welty C (2010) Building Watson: an overview of the DeepQA project. AI Mag 31(3):59–79
Nyga D, Balint-Benczedi F, Beetz M (2014) PR2 Looking at things: ensemble learning for unstructured information processing with Markov logic networks. In: IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China
Tenorth M, Beetz M (2009) KnowRob—knowledge processing for autonomous personal robots. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4261–4266
Wiedemeyer T, Balint-Benczedi F, Beetz M (2015) Pervasive ’calm’ perception for autonomous robotic agents. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. ACM, Istanbul
Patterson D, Fox D, Kautz H, Philipose M (2005) Fine-grained activity recognition by aggregating abstract object usage. In: Proc. of the International Symposium on Wearable Computers
Billard A, Calinon S, Dillmann R, Schaal S (2008) Robot programming by demonstration. In: Springer Handbook of Robotics, Springer, ch. 59
Kim S, Shukla A, Billard A (2014) Catching objects in flight. IEEE Trans Robot 30(5):1049–1065
Nguyen H, Jain A, Anderson C, Kemp C (2008) A clickable world: behavior selection through pointing and context for mobile manipulation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 787–793
Blodow N, Marton ZC, Pangercic D, Rühr T, Tenorth M, Beetz M (2011) Inferring generalized pick-and-place tasks from pointing gestures. In: IEEE International Conference on Robotics and Automation (ICRA), Workshop on Semantic Perception, Mapping and Exploration
Nyga D, Tenorth M, Beetz M (2011) How-models of human reaching movements in the context of everyday manipulation activities. In: IEEE International Conference on Robotics and Automation (ICRA). Shanghai, China
Haidu A, Kohlsdorf D, Beetz M (2014) Learning task outcome prediction for robot control from interactive environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Chicago, USA
Acknowledgments
This work was supported in part by the EU FP7 Projects RoboHow (Grant Agreement Number 288533), SAPHARI (Grant Agreement Number 287513) and ACAT (Grant Agreement Number 600578) and by the German Research Foundation (DFG) as part of the Project MeMoMan2.
Worch, JH., Bálint-Benczédi, F. & Beetz, M. Perception for Everyday Human Robot Interaction. Künstl Intell 30, 21–27 (2016). https://doi.org/10.1007/s13218-015-0400-1