research-article

Comparing Two Safe Distance Maintenance Algorithms for a Gaze-Controlled HRI Involving Users with SSMI

Published: 08 July 2022

Abstract

People with severe speech and motor impairment (SSMI) often find it difficult to manipulate physical objects due to spasticity, but many are already familiar with eye-pointing-based communication. This article presents a novel eye-gaze-controlled augmented reality human-robot interface that maintains a safe distance between the robot and the operator. We used a bespoke appearance-based eye gaze tracking algorithm and compared two different safe distance maintenance algorithms through simulation studies followed by a user trial with end users. Participants with SSMI could bring the robotic arm to any designated point within its working envelope in less than 3 minutes.
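The abstract names but does not detail the two safe-distance maintenance strategies. As a rough, hypothetical sketch of what such strategies can look like, the Python snippet below contrasts a hard stop threshold with linear speed scaling in the spirit of speed-and-separation monitoring (cf. ISO/TS 15066). The function names, speed limit, and distance thresholds are illustrative assumptions, not the algorithms evaluated in the article.

```python
# Hypothetical sketch of two safe-distance strategies for a collaborative
# robot arm. Names and thresholds are illustrative, not the paper's own.

def stop_threshold_speed(distance_m: float, v_max: float = 0.25,
                         d_stop: float = 0.40) -> float:
    """Strategy A: binary stop rule. Halt whenever the operator is
    closer than d_stop metres; otherwise move at full speed."""
    return 0.0 if distance_m < d_stop else v_max

def scaled_speed(distance_m: float, v_max: float = 0.25,
                 d_stop: float = 0.40, d_slow: float = 1.0) -> float:
    """Strategy B: speed scaling in the spirit of speed-and-separation
    monitoring (cf. ISO/TS 15066). Speed ramps linearly from zero at
    d_stop up to v_max at d_slow."""
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_slow:
        return v_max
    return v_max * (distance_m - d_stop) / (d_slow - d_stop)

# Example: commanded end-effector speed at a few operator distances.
for d in (0.3, 0.5, 0.8, 1.2):
    print(f"d = {d:.1f} m  stop rule: {stop_threshold_speed(d):.3f} m/s  "
          f"scaled: {scaled_speed(d):.3f} m/s")
```

In a control loop, the chosen function would be applied to the minimum operator-robot distance reported by a depth sensor or motion-capture system, and the result used to cap the commanded joint or Cartesian speed.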


      • Published in

        ACM Transactions on Accessible Computing, Volume 15, Issue 3 (September 2022), 281 pages
        ISSN: 1936-7228
        EISSN: 1936-7236
        DOI: 10.1145/3544005

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 8 July 2022
        • Online AM: 18 April 2022
        • Accepted: 1 April 2022
        • Revised: 1 February 2022
        • Received: 1 August 2021
