DOI: 10.1145/3216723.3216729

Can we predict stressful technical interview settings through eye-tracking?

Published: 15 June 2018

ABSTRACT

Eye-tracking analysis for measuring cognitive load and stress during whiteboard problem solving in technical interviews has recently been finding its way into the software engineering community. However, there is no empirical study of how much the characteristics of the interview setting themselves affect eye-movement measurements. Without that knowledge, the results of research that analyzes eye-movement measurements for stress detection will not be reliable. In this paper, we analyze the eye movements of 11 participants in two interview settings, one at the whiteboard and the other on paper, to find out whether the characteristics of the interview setting affect the analysis of participants' stress. To this end, we apply seven machine learning classification algorithms to three different labeling strategies of the data, and we suggest to researchers in this domain a useful practice of checking the reliability of the eye measurements before reporting any results.
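
As a rough illustration of the reliability check described above, the following sketch compares several classifiers on the same eye-movement features under different labeling strategies. It is a minimal sketch only: the feature set, the labeling-strategy names, and the use of Python with scikit-learn are illustrative assumptions, not the authors' actual pipeline or data.

# Minimal sketch (not the authors' code): run several classifiers over the
# same eye-movement features under different labeling strategies, as a
# reliability check before reporting stress-detection results.
# Features, labelings, and library choice (scikit-learn) are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder eye-movement features (e.g., fixation duration, saccade length,
# pupil diameter) for samples drawn from two settings: whiteboard and paper.
X = rng.normal(size=(220, 6))

# Three hypothetical labeling strategies over the same samples, e.g. by
# interview setting, by self-reported stress, or by task performance.
labelings = {
    "by_setting": rng.integers(0, 2, size=220),
    "by_self_report": rng.integers(0, 2, size=220),
    "by_performance": rng.integers(0, 2, size=220),
}

classifiers = {
    "logistic": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
}

# If a classifier separates the "setting" labels far better than chance from
# eye-movement features alone, the setting itself is likely driving the
# measurements and would confound stress-detection claims built on them.
for label_name, y in labelings.items():
    for clf_name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
        print(f"{label_name:>15} | {clf_name:>13} | mean F1 = {scores.mean():.2f}")

Cross-validated F1 is used here only as one plausible way to compare the labelings on equal footing; the specific metric and fold count are illustrative choices, not taken from the paper.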


Published in

EMIP '18: Proceedings of the Workshop on Eye Movements in Programming
June 2018, 35 pages
ISBN: 9781450357920
DOI: 10.1145/3216723

      Copyright © 2018 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article
