
Skeleton-based Tai Chi action segmentation using trajectory primitives and content

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Recognizing and analyzing human actions is an important problem in many applications. Most studies focus on single motions, but human activity usually appears as a complex sequence of actions. The attendant problem is that segmenting and labeling action data manually is expensive and time-consuming, especially for motions in professional fields. In this paper, we take Tai Chi as the application setting for action segmentation and propose a supervised method for segmenting Tai Chi action sequences based on trajectory primitives and geometric features. The concept of trajectory primitives is inspired by the way humans recognize actions from action fragments; the primitives are learned by unsupervised clustering with a self-organizing feature map. In addition, we extract geometric features based on the content of the motion. We experimentally analyze the proposed method on the Tai Chi dataset, studying the effect of various parameters and the behavior on abnormal sequences. Experimental results demonstrate that our method achieves state-of-the-art performance. To allow future use by interested researchers, we release the Tai Chi dataset used in this paper.
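To make the primitive-learning idea concrete for interested readers, the sketch below shows one plausible way trajectory primitives could be obtained by unsupervised clustering with a self-organizing map, here using the open-source MiniSom library. The fragment length, grid size, and helper functions (resample_fragment, learn_primitives, primitive_label) are illustrative assumptions, not the authors' published configuration.

# Hypothetical sketch of primitive learning with a self-organizing map.
# Joint-trajectory fragments of varying length are resampled to a fixed
# length, flattened, and clustered; each SOM node then acts as one
# trajectory primitive. All parameters below are illustrative choices.
import numpy as np
from minisom import MiniSom  # pip install minisom

def resample_fragment(fragment, target_len=32):
    # fragment: (T, D) array of joint coordinates over T frames
    T, D = fragment.shape
    src = np.linspace(0.0, 1.0, T)
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, fragment[:, d]) for d in range(D)], axis=1)

def learn_primitives(fragments, grid=(8, 8), target_len=32, iters=5000):
    # Train a SOM on flattened, length-normalized fragments.
    X = np.stack([resample_fragment(f, target_len).ravel() for f in fragments])
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
    som.random_weights_init(X)
    som.train_random(X, iters)
    return som

def primitive_label(som, fragment, target_len=32):
    # Assign a fragment to the index of its best-matching SOM unit.
    x = resample_fragment(fragment, target_len).ravel()
    i, j = som.winner(x)
    return i * som.get_weights().shape[1] + j

In a full pipeline of this kind, the resulting sequence of primitive labels would then be combined with content-based geometric features before supervised segmentation.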




Data availability

The Tai Chi dataset used in this study is available at https://hit605.org/projects/taichi-data/.

Notes

  1. The Tai Chi dataset is available at https://hit605.org/projects/taichi-data/.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61876054).

Author information

Corresponding author

Correspondence to Leiyang Xu.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xu, L., Wang, Q., Lin, X. et al. Skeleton-based Tai Chi action segmentation using trajectory primitives and content. Neural Comput & Applic 35, 9549–9566 (2023). https://doi.org/10.1007/s00521-022-08185-2
