Emotion recognition models for companion robots

Published in The Journal of Supercomputing.

Abstract

Machine learning is increasingly used in healthcare applications such as disease diagnosis, drug discovery, and medical image analysis, where learned models have proven more efficient and less time-consuming than conventional approaches. In this paper, we leverage machine learning models to enable a humanoid robot to assist mental health patients. Facial expressions and the human voice are among the most expressive channels for analyzing human emotion, especially in the mentally challenged. The robot provides this assistance by continuously monitoring the patient's voice (audio) and facial expressions to predict their emotional state. For audio monitoring, we train three different machine learning and deep learning models and compare them to select the best performer. Similarly, for facial expression recognition, we train a deep learning model on a dedicated dataset to predict expressions from video captured in real time. We then integrate the better-performing models into a web interface for demonstration.
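The model-comparison step described above — training several candidate classifiers on audio-derived features and keeping the best one — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic feature vectors stand in for per-utterance audio features such as MFCC statistics (a real pipeline would extract these with an audio library such as librosa), and the two candidate models and the emotion labels shown here are assumptions for demonstration only.

```python
# Hedged sketch: compare candidate classifiers for audio emotion
# recognition and keep the best one, mirroring the paper's
# "train several models, choose the better performer" protocol.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMOTIONS = ["angry", "happy", "neutral", "sad"]  # illustrative label set

# Synthetic stand-in for per-utterance feature vectors
# (e.g. 40 MFCC means per clip, which librosa could provide).
X = rng.normal(size=(400, 40))
y = rng.integers(0, len(EMOTIONS), size=400)
# Shift each class by its own offset so the toy task is learnable.
X += np.eye(len(EMOTIONS))[y] @ (1.5 * rng.normal(size=(len(EMOTIONS), 40)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Fit each candidate and score it on held-out data.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (accuracy {scores[best]:.2f})")
```

In a deployed companion robot, the selected model would be invoked on a rolling window of the monitored audio stream rather than on a fixed test split.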




Author information


Corresponding author

Correspondence to Ritvik Nimmagadda.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ritvik Nimmagadda and Kritika Arora worked on this project as research interns at Ontario Tech University, supported by the MITACS Globalink Research Internship.


About this article


Cite this article

Nimmagadda, R., Arora, K. & Martin, M.V. Emotion recognition models for companion robots. J Supercomput 78, 13710–13727 (2022). https://doi.org/10.1007/s11227-022-04416-4

