Foundations of Visual Linear Human–Robot Interaction via Pointing Gesture Navigation

Published in: International Journal of Social Robotics

Abstract

This paper presents a human–robot interaction method for controlling an autonomous mobile robot with a referential pointing gesture. A human user points to a specific location, the robot detects the pointing gesture, computes its intersection with the surrounding planar surface, and moves to the resulting destination. A depth camera mounted on the robot chassis is used, and the user does not need to wear any extra clothing or markers. The design includes the necessary mathematical concepts, such as transformations between coordinate systems and the vector abstraction of the features needed for simple navigation, details that other current research works omit. We provide an experimental evaluation together with derived probability models. We term this approach “Linear HRI” and define three laws of Linear HRI.
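The core geometric step described in the abstract, finding where the pointing ray meets a planar surface, can be illustrated with a short sketch. This is a minimal example under assumed conventions (elbow and hand joints define the ray, the floor is the plane z = 0, coordinates are in a robot frame with z up), not the authors' implementation:

```python
import numpy as np

def pointing_target_on_plane(origin, through, plane_point, plane_normal):
    """Intersect the ray from `origin` through `through` with a plane.

    origin, through: 3D points defining the pointing ray, e.g. elbow and
    hand joints from a depth-camera skeleton tracker, already expressed
    in the robot frame.
    plane_point, plane_normal: any point on the plane and its normal,
    e.g. the floor the robot drives on.
    Returns the intersection point, or None if the ray is parallel to
    the plane or points away from it.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(through, dtype=float) - origin
    normal = np.asarray(plane_normal, dtype=float)

    denom = direction @ normal
    if abs(denom) < 1e-9:          # ray (nearly) parallel to the plane
        return None
    t = ((np.asarray(plane_point, dtype=float) - origin) @ normal) / denom
    if t <= 0:                     # intersection lies behind the user
        return None
    return origin + t * direction

# Example with made-up joint coordinates (metres, robot frame, z up):
elbow = [0.2, 0.0, 1.3]
hand = [0.5, 0.1, 1.1]
goal = pointing_target_on_plane(elbow, hand, plane_point=[0, 0, 0],
                                plane_normal=[0, 0, 1])
print(goal)  # the x, y components would serve as the navigation goal
```

In practice the joint positions come from the depth camera's skeleton tracker and must first be transformed from the camera frame into the robot frame; the sketch assumes that transformation has already been applied.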

Acknowledgements

This study was funded by APVV-14-0894, VEGA 1/0065/16, KEGA 003STU-4/2014 and by the company Aerobtec.

Author information

Corresponding author

Correspondence to Michal Tölgyessy.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

Tölgyessy, M., Dekan, M., Duchoň, F. et al. Foundations of Visual Linear Human–Robot Interaction via Pointing Gesture Navigation. Int J of Soc Robotics 9, 509–523 (2017). https://doi.org/10.1007/s12369-017-0408-9

Keywords

Navigation