A Novel Gaze-Point-Driven HRI Framework for Single-Person

Conference paper

Abstract

Human-robot interaction (HRI) is an essential mode of information exchange in the age of intelligent systems, and it underpins the emerging human-robot collaborative work mode. Most existing HRI strategies suffer from limitations. First, limb-based HRI relies heavily on the user’s physical movements, making interaction impossible when physical activity is restricted. Second, voice-based HRI is vulnerable to noise in the interaction environment. Finally, although gaze-based HRI reduces both the reliance on physical movements and the impact of environmental noise, external wearables make the interaction less convenient and natural and increase its cost. This paper proposes a novel gaze-point-driven interaction framework that uses only RGB cameras, providing a more convenient and less restricted way of interacting. First, gaze points are estimated from images captured by the cameras. Then, targets are determined by matching these points against the positions of detected objects. Finally, the robot grasps the object the interactor is gazing at. Experiments on the Baxter robot under different lighting conditions, distances, and users demonstrate the robustness of the framework.
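As a concrete illustration of the matching step in this pipeline (gaze points matched against object positions to select a target), the sketch below shows one plausible way to resolve a 2D gaze point to a detected object. It is a minimal, hypothetical Python example, not the authors' implementation: the names (DetectedObject, match_gaze_to_object), the containment-then-nearest-center heuristic, and the pixel threshold are all illustrative assumptions, and it presumes gaze estimates and object detections already share one image coordinate frame.

```python
# Hypothetical sketch of gaze-to-object matching (not the paper's code):
# given an estimated 2D gaze point and the objects detected in the same
# frame, pick the object the user is most likely looking at.
from dataclasses import dataclass
import math


@dataclass
class DetectedObject:
    label: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def center(self):
        return ((self.x_min + self.x_max) / 2.0, (self.y_min + self.y_max) / 2.0)

    def contains(self, x, y):
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def area(self):
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)


def match_gaze_to_object(gaze, objects, max_dist=80.0):
    """Return the gazed-at object, or None if no object is close enough.

    max_dist is an assumed tolerance (in pixels) for gaze-estimation error.
    """
    gx, gy = gaze
    # Direct hit: the gaze point falls inside a detected bounding box.
    hits = [o for o in objects if o.contains(gx, gy)]
    if hits:
        # Prefer the smallest containing box, i.e. the most specific target.
        return min(hits, key=DetectedObject.area)
    # Fallback: the nearest object center within the tolerance radius.
    nearest = min(objects, key=lambda o: math.dist((gx, gy), o.center()), default=None)
    if nearest is not None and math.dist((gx, gy), nearest.center()) <= max_dist:
        return nearest
    return None


# Usage: a gaze estimate landing on a cup among two candidate objects.
objects = [DetectedObject("cup", 300, 200, 360, 280),
           DetectedObject("book", 500, 210, 620, 300)]
target = match_gaze_to_object((330, 240), objects)
print(target.label if target else "no target")  # -> cup
```

Once a target is selected this way, its image coordinates would still need to be mapped into the robot's frame before grasping, which the framework carries out on the Baxter robot.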



Acknowledgment

This work was supported in part by the Key Program of NSFC (Grant No. U1908214), the Special Project of Central Government Guiding Local Science and Technology Development (Grant No. 2021JH6/10500140), the Program for the Liaoning Distinguished Professor, the Program for Innovative Research Team in University of Liaoning Province, Dalian and Dalian University, the Scientific Research Fund of Liaoning Provincial Education Department (No. L2019606), the Dalian University Scientific Research Platform project, and in part by the Science and Technology Innovation Fund of Dalian (Grant No. 2020JJ25CY001).

Author information

Corresponding author

Correspondence to Dongsheng Zhou.


Copyright information

© 2021 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Li, W. et al. (2021). A Novel Gaze-Point-Driven HRI Framework for Single-Person. In: Gao, H., Wang, X. (eds) Collaborative Computing: Networking, Applications and Worksharing. CollaborateCom 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 406. Springer, Cham. https://doi.org/10.1007/978-3-030-92635-9_38

  • DOI: https://doi.org/10.1007/978-3-030-92635-9_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92634-2

  • Online ISBN: 978-3-030-92635-9

  • eBook Packages: Computer Science, Computer Science (R0)
