DOI: 10.1145/3568294.3580146
Short-paper | Public Access

Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals

Published: 13 March 2023 Publication History

Abstract

With the development of Industry 4.0, more collaborative robots (cobots) are being deployed in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study's goal is to identify differences in how deaf and hard of hearing (DHH) and hearing individuals interact with cobots. Understanding these differences may promote inclusiveness by enabling a framework to detect ineffective interactions, reason about why an interaction failed, and adapt its interaction strategy appropriately.


Cited By

  • (2024) Towards Inclusive Video Commenting: Introducing Signmaku for the Deaf and Hard-of-Hearing. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. DOI: 10.1145/3613904.3642287. Online publication date: 11-May-2024.


Published In

HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
March 2023, 612 pages
ISBN: 9781450399708
DOI: 10.1145/3568294

Publisher

Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. cobot
      2. deaf
      3. hard of hearing
      4. human-robot interaction


      Acceptance Rates

      Overall Acceptance Rate 268 of 1,124 submissions, 24%

      Article Metrics

      • Downloads (last 12 months): 108
      • Downloads (last 6 weeks): 23
      Reflects downloads up to 21 Jan 2025

