
Affective-pose gait: perceiving emotions from gaits with body pose and human affective prior knowledge

Published in Multimedia Tools and Applications.

Abstract

As a non-verbal biometric that can convey emotion at a distance, gait has broad applications in affective computing. To perceive emotions from gaits, existing methods usually describe human affective features with hand-crafted quantities such as velocity, acceleration, and area, which fail to capture body-pose features and lack comprehensive representativeness. In this paper, we design fine-grained affective features based on human affective prior knowledge and present a novel perspective that treats the fine-grained affective features of gait as a fusion of spatial-temporal features. Following this perspective, we use an ST-GCN to build the pose features and a CNN to learn the affective features. By integrating the two, we propose the Affective-Pose Gait network, which fuses the pose and affective features to analyze emotions in gaits. Experimental results on the Emotion-Gait dataset show that Affective-Pose Gait achieves 85.2% accuracy and outperforms state-of-the-art methods.
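To make the two-branch design concrete, the sketch below (not the authors' code) shows a minimal PyTorch network in the spirit of the abstract: a pose branch standing in for the ST-GCN stream and a 1D-CNN affective branch over per-frame hand-crafted descriptors, fused by concatenation into a four-class emotion classifier (happy, sad, angry, neutral on Emotion-Gait). The joint count (16), descriptor dimension (32), and all layer sizes are illustrative assumptions, and the pose branch approximates graph convolution with a learned joint-mixing map rather than the adjacency-based ST-GCN layers used in the paper.

```python
import torch
import torch.nn as nn


class PoseBranch(nn.Module):
    """Stand-in for the ST-GCN pose stream: spatial graph convolution is
    approximated by a learned joint-mixing map plus temporal convolution."""

    def __init__(self, joints=16, channels=3, hidden=64):
        super().__init__()
        self.joint_mix = nn.Linear(joints, joints)  # crude spatial mixing over the skeleton
        self.temporal = nn.Sequential(
            nn.Conv1d(joints * channels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )

    def forward(self, x):  # x: (batch, frames, joints, channels)
        x = self.joint_mix(x.transpose(2, 3)).transpose(2, 3)  # mix joints per frame
        b, t, j, c = x.shape
        x = x.reshape(b, t, j * c).transpose(1, 2)  # -> (batch, joints*channels, frames)
        return self.temporal(x).squeeze(-1)  # -> (batch, hidden)


class AffectiveBranch(nn.Module):
    """1D CNN over per-frame hand-crafted affective descriptors
    (e.g. joint velocities, accelerations, angles, areas)."""

    def __init__(self, dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, f):  # f: (batch, frames, dim)
        return self.net(f.transpose(1, 2)).squeeze(-1)  # -> (batch, hidden)


class AffectivePoseGaitSketch(nn.Module):
    """Concatenation fusion of the two streams into an emotion classifier."""

    def __init__(self, num_classes=4, hidden=64):
        super().__init__()
        self.pose = PoseBranch(hidden=hidden)
        self.affect = AffectiveBranch(hidden=hidden)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x, f):
        return self.head(torch.cat([self.pose(x), self.affect(f)], dim=-1))


# Smoke test on random data: 8 clips, 48 frames, 16 joints with 3-D coordinates,
# and 32 hand-crafted descriptors per frame.
model = AffectivePoseGaitSketch()
logits = model(torch.randn(8, 48, 16, 3), torch.randn(8, 48, 32))
print(logits.shape)  # torch.Size([8, 4])
```

Concatenation is the simplest plausible fusion choice here; the paper's actual fusion of pose and affective features may differ, and a real reimplementation would substitute proper ST-GCN layers with the skeleton adjacency matrix.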



Acknowledgements

This work was partially sponsored by Ningbo Science and Technology Plan projects (Grant Nos. 2021S091, 2022Z077, and 2020Z082) and the Ningbo University Scientific Research Innovation Fund (Project Nos. IF2022122, IF2022116, IF2022132, and IF2022133).

Author information


Corresponding author

Correspondence to Liu Zhen.

Ethics declarations

Competing interests

The authors have no competing interests relevant to the content of this article and no financial or proprietary interests in any material discussed in it. The authors are responsible for the correctness of the statements provided in the manuscript.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

YuMeng, Z., Zhen, L., TingTing, L. et al. Affective-pose gait: perceiving emotions from gaits with body pose and human affective prior knowledge. Multimed Tools Appl 83, 5327–5350 (2024). https://doi.org/10.1007/s11042-023-15162-x

