DOI: 10.1145/3316782.3316784

research-article

Towards skill recognition using eye-hand coordination in industrial production

Published: 05 June 2019

Abstract

Companies are re-focusing on human labor [4, 21] in order to create individualized lot-size-1 products rather than producing the exact same mass product again and again. While human workers can produce with at least the same quality as machines, they are less consistent, so it is advantageous to combine the strengths of both humans and machines [14]. In this work, we investigate how human behavior can be utilized in relation to the skill levels that tasks require. To do so, we examine eye-hand coordination on precision tasks and its relation to fine and gross motor skills in an unconstrained industrial setting. The setting consists of an assembly process of up to 22 tasks for two variants of a high-quality product. We establish that there is a high correlation between the expected task-required skill level and the captured eye-hand coordination of expert factory workers, and that eye-hand coordination can be used to distinguish between fine and gross motor skills. In addition, we provide insights into how this can be exploited in future work.
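The abstract reports a correlation between annotated task skill levels and measured eye-hand coordination. As an illustrative sketch only (not the paper's actual pipeline), the snippet below computes a Spearman rank correlation between a hypothetical per-task skill rating and a hypothetical coordination metric (mean gaze-to-hand distance); the data values, the distance metric, and the choice of Spearman correlation are all assumptions made for illustration.

```python
# Illustrative sketch: rank-correlating an assumed per-task skill rating
# with a hypothetical eye-hand coordination metric. All data below is
# fabricated for illustration; the paper's real metric may differ.
import numpy as np

def rank(values):
    """Return 1-based ranks, averaging ranks over ties."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    ranks = np.empty(len(values))
    ranks[order] = np.arange(1, len(values) + 1)
    for v in np.unique(values):       # average ranks of tied values
        mask = values == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical annotations: required skill level per task (1=gross, 5=fine)
skill_level = [1, 2, 2, 3, 4, 5, 5, 3]
# Hypothetical metric: mean gaze-to-dominant-hand distance in pixels
# (a smaller distance would indicate tighter eye-hand coordination)
gaze_hand_dist = [180, 160, 150, 120, 90, 60, 70, 110]

rho = spearman(skill_level, gaze_hand_dist)
print(f"Spearman rho = {rho:.2f}")  # strongly negative for this toy data
```

A strongly negative rho here would mean that, in this toy data, finer-motor tasks co-occur with tighter gaze-to-hand coupling, which is the qualitative relationship the abstract describes.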

References

[1]
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https://www.tensorflow.org/ Software available from tensorflow.org.
[2]
Peter Abeles. 2016. BoofCV. http://boofcv.org.
[3]
Sven Bambach, Stefan Lee, David Crandall, and Chen Yu. 2015. Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions. In 2015 IEEE International Conference on Computer Vision (ICCV).
[4]
Elisabeth Behrmann and Christoph Rauwald. 2016. Mercedes Boots Robots From the Production Line. (2016). https://www.bloomberg.com/news/articles/2016-02-25/why-mercedes-is-halting-robots-reign-on-the-production-line Accessed: 2017-02-01.
[5]
Hans-Joachim Bieg, Lewis L. Chuang, Roland W. Fleming, Harald Reiterer, and Heinrich H. Bülthoff. 2010. Eye and Pointer Coordination in Search and Selection Tasks. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (ETRA '10). ACM, New York, NY, USA, 89--92.
[6]
Andreas Bulling, Ulf Blanke, Desney Tan, Jun Rekimoto, and Gregory Abowd. 2015. Introduction to the Special Issue on Activity Recognition for Interaction. ACM Trans. Interact. Intell. Syst. 4, 4 (Jan. 2015), 16e:1--16e:3.
[7]
Graham Cheetham and Geoff Chivers. 2001. How professionals learn in practice: an investigation of informal learning amongst people working in professions. Journal of European Industrial Training 25, 5 (2001), 247--292.
[8]
L. Chen, J. Hoey, C. D. Nugent, D. J. Cook, and Z. Yu. 2012. Sensor-Based Activity Recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, 6 (Nov 2012), 790--808.
[9]
Allan Collins. 2005. Cognitive apprenticeship. In The Cambridge handbook of the learning sciences, R Keith Sawyer (Ed.). Cambridge University Press.
[10]
J. Douglas Crawford, W. Pieter Medendorp, and Jonathan J. Marotta. 2004. Spatial transformations for eye-hand coordination. Journal of neurophysiology 92, 1 (2004), 10--19.
[11]
Michel C. Desmarais and Ryan S. Baker. 2012. A Review of Recent Advances in Learner and Skill Modeling in Intelligent Learning Environments. User Modeling and User-Adapted Interaction 22, 1--2 (apr 2012), 9--38.
[12]
Arlene Dohm. 2000. Gauging the labor force effects of retiring baby-boomers. Monthly Labor Review 123 (2000), 17.
[13]
A. Ferscha. 2014. Attention, Please! IEEE Pervasive Computing 13, 1 (February 2014).
[14]
Paul M. Fitts. 1951. Human engineering for an effective air-navigation and traffic-control system. (1951).
[15]
Emma Gowen and R. Chris Miall. 2006. Eye-hand interactions in tracing and drawing tasks. Human Movement Science 25, 4 (2006), 568--585.
[16]
Paul L. Gribble, Stefan Everling, Kristen Ford, and Andrew Mattar. 2002. Hand-eye coordination for rapid pointing movements. Experimental Brain Research 145, 3 (Aug 2002), 372--382.
[17]
Reactive Streams Special Interest Group. 2018. Reactive Streams. http://www.reactive-streams.org/ Accessed: 2018-06-18.
[18]
Michael Haslgrübler, Peter Fritz, Benedikt Gollan, and Alois Ferscha. 2017. Getting Through - Modality Selection in a Multi-Sensor-Actuator Industrial IoT Environment. In Proceedings of the 7th International Conference on the Internet of Things. ACM, 8.
[19]
K. He, X. Zhang, S. Ren, and J. Sun. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 770--778.
[20]
Weiming Hu, Tieniu Tan, Liang Wang, and Steve Maybank. 2004. A survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 34, 3 (2004), 334--352.
[21]
Dana Hull. 2018. Musk Says Excessive Automation Was 'My Mistake'. (2018). https://www.bloomberg.com/news/articles/2018-04-13/musk-tips-his-tesla-cap-to-humans-after-robots-undercut-model-3 Accessed: 2018-06-19.
[22]
iMatix Corporation. 2014. ZMQ Distributed Messaging. http://zeromq.org.
[23]
Quintus R. Jett and Jennifer M. George. 2003. Work Interrupted: A Closer Look at the Role of Interruptions in Organizational Life. Academy of Management Review 28, 3 (2003), 494--507.
[24]
Roland S. Johansson, Göran Westling, Anders Bäckström, and J. Randall Flanagan. 2001. Eye-Hand Coordination in Object Manipulation. Journal of Neuroscience 21, 17 (2001), 6917--6932.
[25]
Moritz Kassner, William Patera, and Andreas Bulling. 2014. Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction. In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing: Adjunct publication. 1151--1160.
[26]
Wilhelmine Koerth. 1922. A pursuit apparatus: Eye-hand coordination. Psychological Monographs 31, 1 (1922), 288.
[27]
Michael Land and Benjamin Tatler. 2009. Looking and acting: vision and eye movements in natural behaviour. Oxford University Press.
[28]
Oscar D. Lara and Miguel A. Labrador. 2013. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Communications Surveys Tutorials 15, 3 (Third 2013), 1192--1209.
[29]
Michael A. Lawrence. 2016. Package 'ez': Easy Analysis and Visualization of Factorial Experiments. https://cran.r-project.org/package=ez. Accessed: 2017-12-15.
[30]
R. A. Magill. 1993. Motor learning: Concepts and applications (4th ed.). Madison: Brown and Benchmark.
[31]
Mark Wilson, Mark Coleman, and John McGrath. 2010. Developing basic hand-eye coordination skills for laparoscopic surgery using gaze training. BJU International 105, 10 (2010), 1356--1358.
[32]
Leigh A. Mrotek and John F. Soechting. 2007. Target Interception: Hand-Eye Coordination and Strategies. Journal of Neuroscience 27, 27 (2007), 7297--7309.
[33]
Jeff Pelz, Mary Hayhoe, and Russ Loeber. 2001. The coordination of eye, head, and hand movements in a natural task. Experimental Brain Research 139, 3 (Aug 2001), 266--277.
[34]
Marie-Laure Kaiser, Jean-Michel Albaret, and Pierre-André Doudin. 2009. Relationship Between Visual-Motor Integration, Eye-Hand Coordination, and Quality of Handwriting. Journal of Occupational Therapy, Schools, & Early Intervention 2, 2 (2009), 87--95.
[35]
Pivotal Software, Inc. 2018. Project Reactor. https://projectreactor.io/ Accessed: 2018-06-18.
[36]
Ronald Poppe. 2010. A survey on vision-based human action recognition. Image and vision computing 28, 6 (2010), 976--990.
[37]
R Development Core Team. 2008. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org ISBN 3-900051-07-0.
[38]
Brandon Rigoni and Amy Adkins. 2015. As Baby Boomers Retire, It's Time to Replenish Talent. (2015). http://news.gallup.com/businessjournal/181295/baby-boomers-retire-time-replenish-talent.aspx Accessed: 2018-02-01.
[39]
D. Roggen, K. Forster, A. Calatroni, T. Holleczek, Y. Fang, G. Troster, A. Ferscha, C. Holzmann, A. Riener, P. Lukowicz, G. Pirkl, D. Bannach, K. Kunze, R. Chavarriaga, and J. d. R. Millan. 2009. OPPORTUNITY: Towards opportunistic activity and context recognition systems. In 2009 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks Workshops. 1--6.
[40]
Uta Sailer, Thomas Eggert, Jochen Ditterich, and Andreas Straube. 2000. Spatial and temporal aspects of eye-hand coordination across different tasks. Experimental Brain Research 134, 2 (Sep 2000), 163--173.
[41]
Olga C. Santos. 2016. Training the Body: The Potential of AIED to Support Personalized Motor Skills Learning. International Journal of Artificial Intelligence in Education 26, 2 (2016), 730--755.
[42]
Thomas Stiefmeier, Daniel Roggen, Georg Ogris, Paul Lukowicz, and Gerhard Tröster. 2008. Wearable activity tracking in car manufacturing. IEEE Pervasive Computing 7, 2 (2008).
[43]
Marvin Teichmann, Michael Weber, Marius Zoellner, Roberto Cipolla, and Raquel Urtasun. 2016. MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving. arXiv preprint arXiv:1612.07695 (2016).
[44]
Pavan Turaga, Rama Chellappa, Venkatramana S Subrahmanian, and Octavian Udrea. 2008. Machine recognition of human activities: A survey. IEEE Transactions on Circuits and Systems for Video technology 18, 11 (2008), 1473.
[45]
Athanasios Voulodimos, Dimitrios Kosmopoulos, Georgios Vasileiou, Emmanuel Sardis, Vasileios Anagnostopoulos, Constantinos Lalos, Anastasios Doulamis, and Theodora Varvarigou. 2012. A threefold dataset for activity and workflow recognition in complex industrial environments. IEEE MultiMedia 19, 3 (2012), 42--52.
[46]
Andreas Wendemuth and Susanne Biundo. 2012. A companion technology for cognitive technical systems. In Cognitive behavioural systems. Springer, 89--103.
[47]
Daniel M. Wolpert, Jörn Diedrichsen, and J. Randall Flanagan. 2011. Principles of sensorimotor learning. Nature Reviews Neuroscience 12, 12 (2011).
[48]
Dean Wyatte and Thomas Busey. 2008. Low and high level changes in eye gaze behavior as a result of expertise. Journal of Vision 8, 6 (2008), 112--112.
[49]
Shohei Yamaguchi, Kozo Konishi, Takefumi Yasunaga, Daisuke Yoshida, Nao Kinjo, Kiichiro Kobayashi, Satoshi Ieiri, Ken Okazaki, Hideaki Nakashima, Kazuo Tanoue, Yoshihiko Maehara, and Makoto Hashizume. 2007. Construct validity for eye-hand coordination skill on a virtual reality laparoscopic surgical simulator. Surgical Endoscopy 21, 12 (Dec 2007), 2253--2257.
[50]
Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision. Springer, 818--833.



Published In

PETRA '19: Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments
June 2019
655 pages
ISBN:9781450362320
DOI:10.1145/3316782

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eye tracking
  2. learning
  3. skill recognition
  4. wearable and pervasive computing

Qualifiers

  • Research-article

Funding Sources

  • Österreichische Forschungsförderungsgesellschaft

Conference

PETRA '19


Cited By

  • (2024) Automated Assessment and Adaptive Multimodal Formative Feedback Improves Psychomotor Skills Training Outcomes in Quadrotor Teleoperation. Proceedings of the 12th International Conference on Human-Agent Interaction, 185--194. DOI: 10.1145/3687272.3688322. Online publication date: 24-Nov-2024.
  • (2024) Eye-tracking support for analyzing human factors in human-robot collaboration during repetitive long-duration assembly processes. Production Engineering 19, 1, 47--64. DOI: 10.1007/s11740-024-01294-y. Online publication date: 20-Jun-2024.
  • (2022) Towards Flexible and Cognitive Production—Addressing the Production Challenges. Applied Sciences 12, 17 (8696). DOI: 10.3390/app12178696. Online publication date: 30-Aug-2022.
  • (2022) Finger Joint Angle Estimation With Visual Attention for Rehabilitation Support: A Case Study of the Chopsticks Manipulation Test. IEEE Access 10, 91316--91331. DOI: 10.1109/ACCESS.2022.3201894. Online publication date: 2022.
  • (2022) Opportunities for using eye tracking technology in manufacturing and logistics. Computers and Industrial Engineering 171, C. DOI: 10.1016/j.cie.2022.108444. Online publication date: 1-Sep-2022.
  • (2021) The INCLUSIVE System: A General Framework for Adaptive Industrial Automation. IEEE Transactions on Automation Science and Engineering 18, 4, 1969--1982. DOI: 10.1109/TASE.2020.3027876. Online publication date: Oct-2021.
  • (2021) An experimental study on augmented reality assisted manual assembly with occluded components. Journal of Manufacturing Systems. DOI: 10.1016/j.jmsy.2021.04.003. Online publication date: Apr-2021.
