
Pointing Gestures for Human-Robot Interaction in Service Robotics: A Feasibility Study

  • Conference paper
  • In: Computers Helping People with Special Needs (ICCHP-AAATE 2022)

Abstract

Research in service robotics strives to have a positive impact on people’s quality of life by introducing robotic helpers for everyday activities. From this ambition arises the need to enable natural communication between robots and ordinary people. For this reason, Human-Robot Interaction (HRI) is an extensively investigated topic, extending beyond language-based exchange of information to include all the relevant facets of communication. Each communication channel (e.g. hearing, sight, touch) comes with its own strengths and limitations, so channels are often combined to improve robustness and naturalness. In this contribution, an HRI framework is presented that adopts pointing gestures as the preferred interaction strategy. Pointing gestures are selected because they are an innate behavior for directing another person’s attention, and thus could represent a natural way to request a service from a robot. To complement the visual information, the user can be prompted to give voice commands to resolve ambiguities and prevent the execution of unintended actions. The two-layer (perceptive and semantic) architecture of the proposed HRI system is described. The perceptive layer is responsible for object mapping, action detection, and assessment of the indicated direction; it also listens for users’ voice commands. To avoid privacy issues and to limit the load on the robot’s computational resources, the interaction is triggered by a wake-word detection system. The semantic layer receives the information processed by the perceptive layer and determines which actions are available for the selected object. The decision is based on the object’s characteristics and contextual information, and the user’s vocal feedback is exploited to resolve ambiguities. A pilot implementation of the semantic layer is detailed, and qualitative results are shown. The preliminary findings on the validity of the proposed system, as well as on the limitations of a purely vision-based approach, are discussed.
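To make the described pipeline concrete, below is a minimal, hypothetical sketch of how the two layers could interact: the perceptive layer turns body keypoints into a pointing ray and picks the closest mapped object, while the semantic layer filters the plausible actions for that object and falls back on a vocal prompt when the choice is ambiguous. The eye-wrist ray, the object map, the affordance table, and all names and thresholds are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

def pointing_ray(eye_xyz, wrist_xyz):
    """Ray from the eye through the wrist (a common eye-wrist pointing model)."""
    origin = np.asarray(eye_xyz, dtype=float)
    direction = np.asarray(wrist_xyz, dtype=float) - origin
    return origin, direction / np.linalg.norm(direction)

def select_object(origin, direction, object_map, max_dist=0.3):
    """Return the name of the mapped object whose centre lies closest to the ray."""
    best, best_dist = None, max_dist
    for name, centre in object_map.items():
        v = np.asarray(centre, dtype=float) - origin
        # Perpendicular distance from the object centre to the pointing ray.
        dist = np.linalg.norm(v - np.dot(v, direction) * direction)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# Illustrative affordance table for the semantic layer (an assumption).
AFFORDANCES = {
    "bottle": ["bring it to me", "throw it away"],
    "door": ["open", "close"],
}

def resolve_action(obj_name, ask_user):
    """Pick the action for the selected object, asking the user when ambiguous."""
    actions = AFFORDANCES.get(obj_name, [])
    if len(actions) == 1:
        return actions[0]
    # More than one plausible action: defer to a vocal prompt / confirmation.
    return ask_user(f"What should I do with the {obj_name}? Options: {actions}")

# Example: a user points from eye (0, 0, 1.6) through wrist (0.3, 0.1, 1.3).
origin, direction = pointing_ray([0.0, 0.0, 1.6], [0.3, 0.1, 1.3])
target = select_object(origin, direction, {"bottle": [1.2, 0.4, 0.4]})
if target is not None:
    action = resolve_action(target, ask_user=lambda prompt: input(prompt + " "))
```

The eye-wrist line is one common pointing model in the vision-based HRI literature; head-hand or elbow-wrist rays are frequent alternatives. In the described system, the wake-word check would gate this whole pipeline before any frame or audio is processed.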



Author information


Corresponding author

Correspondence to Luca Pozzi.



Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Pozzi, L., Gandolla, M., Roveda, L. (2022). Pointing Gestures for Human-Robot Interaction in Service Robotics: A Feasibility Study. In: Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P. (eds) Computers Helping People with Special Needs. ICCHP-AAATE 2022. Lecture Notes in Computer Science, vol 13342. Springer, Cham. https://doi.org/10.1007/978-3-031-08645-8_54


  • DOI: https://doi.org/10.1007/978-3-031-08645-8_54

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08644-1

  • Online ISBN: 978-3-031-08645-8

  • eBook Packages: Computer Science, Computer Science (R0)
