DOI: 10.1145/3581754.3584124
poster

Capturing and Predicting User Frustration to Support a Smart Operating System

Published: 27 March 2023

ABSTRACT

This paper presents an IRB-approved human study that captures data for building models to predict the frustration of computer users. First, an application was developed that ran on the user's computer, laptop, or VM under Linux 20.04. The application then collected a variety of data from the machine: mouse clicks, movements, and scrolls; keystroke patterns; audio features of the user; head movements extracted from the user's video; system-wide information such as CPU and memory usage, network bandwidth, and input/output bandwidth of the running applications; and user frustration reports. Finally, the application sent the data to the cloud. After two weeks of data collection, supervised and semi-supervised models were trained offline to predict user frustration with the computer from the collected data. A semi-supervised model based on a generative adversarial network (GAN) achieved the highest accuracy, 90%.
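The paper itself includes no code, but one of the signals it collects, keystroke patterns, can be illustrated with a minimal sketch. Assuming timestamped key press/release events (the event format and feature names below are hypothetical, not taken from the paper), dwell times and flight times, two features commonly used in keystroke-based affect recognition, could be computed like this:

```python
def keystroke_features(events):
    """Compute per-key dwell times and inter-key flight times.

    events: list of (timestamp_seconds, key, action) tuples, where
    action is "down" or "up". This event format is illustrative only.
    Dwell = how long a key is held; flight = gap between one key's
    release and the next key's press.
    """
    down_at = {}          # key -> timestamp of its last press
    dwell, flight = [], []
    last_up = None        # timestamp of the most recent key release
    for t, key, action in events:
        if action == "down":
            if last_up is not None:
                flight.append(t - last_up)      # gap between keystrokes
            down_at[key] = t
        elif action == "up" and key in down_at:
            dwell.append(t - down_at.pop(key))  # hold duration
            last_up = t
    return dwell, flight

# Hypothetical event stream: typing 'h' then 'i'
events = [(0.00, "h", "down"), (0.12, "h", "up"),
          (0.30, "i", "down"), (0.41, "i", "up")]
dwell, flight = keystroke_features(events)
```

Such per-window summaries (e.g., mean dwell/flight over a few seconds) are the kind of behavioral feature the cited keystroke-dynamics work feeds into affect classifiers; how the authors' application actually aggregates these signals is not specified in the abstract.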


Published in

IUI '23 Companion: Companion Proceedings of the 28th International Conference on Intelligent User Interfaces
March 2023, 266 pages
ISBN: 9798400701078
DOI: 10.1145/3581754

Copyright © 2023 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

• poster
• Research
• Refereed limited

Acceptance Rates

Overall acceptance rate: 746 of 2,811 submissions (27%)