DOI: 10.1145/3351529.3360653

Are Humans Biased in Assessment of Video Interviews?

Published: 14 October 2019

ABSTRACT

Supervised systems require human labels for training. But are humans themselves always impartial during the annotation process? We examine this question in the context of automated assessment of human behavioral tasks. Specifically, we investigate whether human ratings can be taken at face value when scoring video-based structured interviews, and whether such ratings can affect machine learning models that use them as training data. We present preliminary empirical evidence of biases in such annotations, most of which are visual in nature.
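The concern the abstract raises can be illustrated with a toy simulation (not from the paper; the attribute, numbers, and "model" here are hypothetical): if annotators systematically inflate scores for candidates with some visually salient attribute, a supervised model fit to those labels inherits the inflation even though true skill is identical across groups.

```python
import random
import statistics

random.seed(0)

# Toy setup: true interview skill is independent of a visual attribute,
# but annotators add a +1 bonus whenever the attribute is present.
def make_example():
    visual = random.random() < 0.5           # hypothetical visual attribute
    skill = random.gauss(5.0, 1.0)           # latent true skill, same distribution for both groups
    human_label = skill + (1.0 if visual else 0.0) + random.gauss(0, 0.3)
    return visual, skill, human_label

data = [make_example() for _ in range(10_000)]

# A minimal "model" fit on the biased labels: the per-group mean label,
# i.e. the best constant predictor for each group.
pred_with = statistics.mean(l for v, _, l in data if v)
pred_without = statistics.mean(l for v, _, l in data if not v)

# True skill shows no group gap, but the learned predictions do.
skill_gap = (statistics.mean(s for v, s, _ in data if v)
             - statistics.mean(s for v, s, _ in data if not v))
learned_gap = pred_with - pred_without
print(f"true skill gap: {skill_gap:+.2f}")   # close to 0
print(f"learned gap:    {learned_gap:+.2f}") # close to +1, inherited from the raters
```

The point of the sketch is only that the bias enters through the labels, not the features: any learner trained to reproduce these ratings will reproduce the rater bias along with them.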


Published in

ICMI '19: Adjunct of the 2019 International Conference on Multimodal Interaction
October 2019
86 pages
ISBN: 9781450369374
DOI: 10.1145/3351529

    Copyright © 2019 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • abstract
    • Research
    • Refereed limited

    Acceptance Rates

Overall Acceptance Rate: 453 of 1,080 submissions, 42%
