ABSTRACT
This paper presents an IRB-approved human study that captured data for building models that predict computer users' frustration. First, a data-collection application was developed that ran on each participant's computer, laptop, or VM running Linux 20.04. The application collected a variety of signals: mouse clicks, movements, and scrolls; keystroke patterns; audio features of the user's voice; head movements extracted from the user's video; system-wide information such as CPU utilization, memory usage, network bandwidth, and I/O bandwidth of the applications running on the machine; and the user's self-reported frustration. The application then sent the data to the cloud. After two weeks of data collection, supervised and semi-supervised models were trained offline to predict user frustration from the collected data. A semi-supervised model based on a generative adversarial network (GAN) achieved the highest accuracy, 90%.
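The abstract does not detail how the raw input streams are turned into model features. As a minimal, hypothetical sketch (the paper's actual feature set and window length are not given here), timestamped keyboard and mouse events might be aggregated into fixed-length windows of simple count and timing features before training:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Event:
    t: float    # timestamp in seconds since collection start
    kind: str   # "key", "click", "scroll", or "move"

def window_features(events, window=60.0):
    """Aggregate raw input events into fixed-length feature windows.

    Returns one dict of counts/timing features per window. This is an
    illustrative preprocessing step, not the paper's published pipeline.
    """
    if not events:
        return []
    start = min(e.t for e in events)
    buckets = {}
    for e in events:
        idx = int((e.t - start) // window)
        buckets.setdefault(idx, []).append(e)
    feats = []
    for idx in sorted(buckets):
        evs = sorted(buckets[idx], key=lambda e: e.t)
        keys = [e for e in evs if e.kind == "key"]
        # inter-keystroke gaps within the window (keystroke dynamics)
        gaps = [b.t - a.t for a, b in zip(keys, keys[1:])]
        feats.append({
            "window": idx,
            "n_keys": len(keys),
            "n_clicks": sum(e.kind == "click" for e in evs),
            "n_scrolls": sum(e.kind == "scroll" for e in evs),
            "mean_interkey_s": mean(gaps) if gaps else 0.0,
        })
    return feats
```

Feature vectors like these, paired with the user's frustration self-reports for labeled windows and left unlabeled otherwise, are the kind of input a semi-supervised GAN classifier can consume.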
Capturing and Predicting User Frustration to Support a Smart Operating System