ABSTRACT
This paper presents ongoing work toward real-time action recognition tailored for child-centered research. To this end, we collected and annotated a dataset of 200 primary school children aged 6 to 11 years. Each child was asked to perform seven actions: boxing, waving, clapping, running, jogging, walking towards the camera, and walking from side to side. Two camera perspectives are provided: a top view in RGB format and a frontal view in both RGB and RGB-D formats. Body keypoints (skeleton data) are extracted using the OpenPose and OpenNI tools. The results of this work are expected to help bridge the performance gap between activity recognition systems for adults and those for children.
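As an illustration of how the extracted skeleton data might be consumed downstream, the sketch below parses OpenPose's standard per-frame JSON output into per-person keypoint triples. This is a minimal example, not code from the paper; the sample frame and its three-keypoint skeleton are hypothetical, and real OpenPose output carries 25 body keypoints per person in the default BODY_25 model.

```python
import json

def load_openpose_keypoints(json_str):
    """Parse one OpenPose output frame (JSON) into per-person keypoint lists.

    OpenPose writes each detected person's 2D pose as a flat list
    [x0, y0, c0, x1, y1, c1, ...] under the "pose_keypoints_2d" key,
    where c is the detection confidence for each keypoint.
    Returns a list of (x, y, confidence) triples for each person.
    """
    frame = json.loads(json_str)
    people = []
    for person in frame.get("people", []):
        flat = person["pose_keypoints_2d"]
        triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(triples)
    return people

# Hypothetical single-person frame, truncated to three keypoints for brevity.
sample = ('{"people": [{"pose_keypoints_2d": '
          '[120.0, 80.0, 0.9, 118.0, 110.0, 0.85, 125.0, 140.0, 0.7]}]}')
skeletons = load_openpose_keypoints(sample)
print(skeletons[0][0])  # → (120.0, 80.0, 0.9)
```

A per-frame list of such triples is a common input representation for skeleton-based action classifiers, with the confidence channel used to mask unreliable joints.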
REFERENCES
- Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2018. OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. arXiv preprint arXiv:1812.08008.
- James Kennedy, Séverin Lemaignan, Caroline Montassier, Pauline Lavalade, Bahar Irfan, Fotios Papadopoulos, Emmanuel Senft, and Tony Belpaeme. 2017. Child speech recognition in human-robot interaction: evaluations and recommendations. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 82--90.
- Anara Sandygulova, Yerdaulet Absattar, Damir Doszhan, and German I. Parisi. 2016. Child-centred motion-based age and gender estimation with neural network learning. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
- Anara Sandygulova, Mauro Dragone, and Gregory M. P. O'Hare. 2014. Real-time adaptive child-robot interaction: Age and gender determination of children based on 3D body metrics. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 826--831.
- Anara Sandygulova, Wafa Johal, Zhanel Zhexenova, Bolat Tleubayev, Aida Zhanatkyzy, Aizada Turarova, Zhansaule Telisheva, Anna CohenMiller, Thibault Asselborn, and Pierre Dillenbourg. 2020. CoWriting Kazakh: Learning a New Script with a Robot. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20), Cambridge, United Kingdom. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3319502.3374813
- Anara Sandygulova and Gregory M. P. O'Hare. 2018. Age- and gender-based differences in children's interactions with a gender-matching robot. International Journal of Social Robotics, Vol. 10, 5 (2018), 687--700.
- Christian Schuldt, Ivan Laptev, and Barbara Caputo. 2004. Recognizing human actions: a local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 3. IEEE, 32--36.
Index Terms
- Child Action Recognition in RGB and RGB-D Data