
SELFI: Evaluation of Techniques to Reduce Self-report Fatigue by Using Facial Expression of Emotion

  • Conference paper

Human-Computer Interaction – INTERACT 2023 (INTERACT 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14142)


Abstract

This paper presents the SELFI framework, which uses information from a range of indirect measures to reduce the burden placed on users of context-sensitive apps by the need to self-report their mental state. Within this framework, we implement multiple combinations of facial emotion recognition tools (Amazon Rekognition, Google Vision, Microsoft Face) and feature reduction approaches to demonstrate the versatility of the framework in facial-expression-based emotion estimation. An evaluation involving 20 participants in a 28-week in-the-wild study shows that the framework can estimate emotion accurately from facial images (\(83\%\) and \(81\%\) macro-F1 for valence and arousal, respectively), with an average reduction of \(10\%\) in self-report burden. Moreover, we propose a solution that detects, at runtime and without ground-truth emotion labels, when the performance of a model developed with SELFI drops, and with it we achieve accuracy improvements of 14%.
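
As a rough illustration of the kind of pipeline the abstract describes, the sketch below extracts per-face emotion confidences from Amazon Rekognition and feeds them, as a feature vector, to a generic classifier for binary valence. The Rekognition call is the real boto3 API; the feature layout, label encoding, and choice of classifier are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: emotion-confidence features from Amazon Rekognition
# feeding a generic valence classifier. Not the authors' implementation.
import boto3
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Emotion categories reported by Rekognition's face analysis.
EMOTIONS = ["HAPPY", "SAD", "ANGRY", "CONFUSED",
            "DISGUSTED", "SURPRISED", "CALM", "FEAR"]

def emotion_features(image_bytes: bytes) -> np.ndarray:
    """Return a fixed-length vector of emotion confidences for the first detected face."""
    client = boto3.client("rekognition")
    resp = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
    if not resp["FaceDetails"]:
        return np.zeros(len(EMOTIONS))
    scores = {e["Type"]: e["Confidence"] for e in resp["FaceDetails"][0]["Emotions"]}
    return np.array([scores.get(name, 0.0) for name in EMOTIONS])

def fit_valence_model(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """Fit a stand-in classifier on self-reported valence labels (1 = positive, 0 = negative).
    X: stacked feature vectors from labelled facial images; y: corresponding self-reports."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model
```

An analogous vector could be built from Google Vision or Microsoft Face responses and concatenated or reduced before training; the classifier above is only a placeholder for whatever model the framework selects.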


Notes

  1. In https://anonymous.4open.science/r/Image_collection_Upload_Dropbox-0565/ we provide the implementation of the data collection apparatus.

  2. The IRB approval number is IIT/SRIC/SAO/2017.

  3. We advised the participants to rely on the Digital Wellbeing and Parental Control tool to respond to the smartphone-usage-related questionnaire.

  4. In https://anonymous.4open.science/r/SELFI-77A3/ we provide the implementation of the SELFI framework, with a toy dataset.
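
The abstract also claims that a drop in model performance can be detected at runtime without ground-truth emotion labels. One plausible way to approximate that idea, purely for illustration, is to compare the distribution of incoming facial-expression features against the training-time distribution and flag a drop when the divergence exceeds a threshold. The sketch below uses a symmetric KL divergence over per-feature Gaussians; the distributional assumptions and the threshold are mine, not the paper's exact method.

```python
# Illustrative sketch: flag a possible performance drop at runtime by measuring
# drift between training-time and runtime feature distributions, without using
# ground-truth emotion labels. Not the authors' exact method.
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence KL(N(mu_p, var_p) || N(mu_q, var_q)), computed element-wise."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def drift_score(train_feats: np.ndarray, runtime_feats: np.ndarray) -> float:
    """Mean symmetric KL divergence across feature dimensions (both arrays: samples x features)."""
    mu_t, var_t = train_feats.mean(0), train_feats.var(0) + 1e-8
    mu_r, var_r = runtime_feats.mean(0), runtime_feats.var(0) + 1e-8
    sym_kl = gaussian_kl(mu_t, var_t, mu_r, var_r) + gaussian_kl(mu_r, var_r, mu_t, var_t)
    return float(sym_kl.mean())

def performance_drop_suspected(train_feats, runtime_window, threshold=1.0):
    # The threshold is an illustrative value; it would need tuning on held-out data.
    return drift_score(train_feats, runtime_window) > threshold
```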


Author information


Corresponding author

Correspondence to Salma Mandi.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mandi, S., Ghosh, S., De, P., Mitra, B. (2023). SELFI: Evaluation of Techniques to Reduce Self-report Fatigue by Using Facial Expression of Emotion. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol 14142. Springer, Cham. https://doi.org/10.1007/978-3-031-42280-5_39

  • DOI: https://doi.org/10.1007/978-3-031-42280-5_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42279-9

  • Online ISBN: 978-3-031-42280-5

  • eBook Packages: Computer Science, Computer Science (R0)
