A Deep Learning-Based Approach to Detect Correct Suryanamaskara Pose

Original Research · Published in SN Computer Science

Abstract

We present a technique for analysing Suryanamaskara poses using keypoint estimation and statistical analysis. The proposed approach uses a model trained on the COCO keypoint detection dataset to locate keypoints in yoga poses, and builds on this keypoint detection to propose a self-correction system for yoga practice. A novel dataset, Surya-yoga, containing 10,000 Suryanamaskara poses, has been generated and made publicly available. When tested using part affinity fields, the presented model performed better on both the COCO dataset and the combined COCO and Surya-yoga dataset. Alongside the deep learning methods, the work also presents an analytical method for distinguishing the different Suryanamaskara poses.
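The abstract does not include the authors' implementation, so the following is only a minimal sketch of how keypoint output could drive the kind of self-correction system described: joint angles are computed from 2D keypoints (assumed here to be in the 17-point COCO order) and compared against the same angles in a stored reference pose. The joint triplets, the 15° tolerance, and the random stand-in keypoints are illustrative assumptions, not values from the paper.

```python
# Sketch (not the authors' code): angle-based comparison of a detected pose
# against a reference pose, given 2D keypoints in the 17-point COCO order.
import numpy as np

# (parent, joint, child) index triplets whose inner angle we measure,
# indexed into the 17-keypoint COCO skeleton. Chosen for illustration.
ANGLE_TRIPLETS = {
    "left_elbow":  (5, 7, 9),     # shoulder -> elbow -> wrist
    "right_elbow": (6, 8, 10),
    "left_knee":   (11, 13, 15),  # hip -> knee -> ankle
    "right_knee":  (12, 14, 16),
}

def joint_angle(pts: np.ndarray, a: int, b: int, c: int) -> float:
    """Inner angle (degrees) at keypoint b, formed by segments b->a and b->c."""
    v1, v2 = pts[a] - pts[b], pts[c] - pts[b]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def pose_feedback(detected: np.ndarray, reference: np.ndarray,
                  tol_deg: float = 15.0) -> dict:
    """Flag joints whose angle deviates from the reference by more than tol_deg."""
    report = {}
    for name, (a, b, c) in ANGLE_TRIPLETS.items():
        diff = abs(joint_angle(detected, a, b, c) - joint_angle(reference, a, b, c))
        report[name] = {"deviation_deg": round(diff, 1), "ok": diff <= tol_deg}
    return report

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 480, size=(17, 2))      # stand-in reference keypoints
    cur = ref + rng.normal(0, 20, size=(17, 2))  # perturbed "detected" pose
    print(pose_feedback(cur, ref))
```

In practice the `detected` array would come from any COCO-trained keypoint estimator (such as the part-affinity-fields model the abstract evaluates), and each of the ten Suryanamaskara steps would have its own reference angles.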



Author information


Corresponding author

Correspondence to Koushlendra Kumar Singh.

Ethics declarations

Conflict of Interest

Ujjayanta Bhaumik, Koushlendra Kumar Singh, Akbar Sheikh Akbari, and Manish K. Bajpai declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Advances in Machine Vision and Augmented Intelligence” guest edited by Manish Kumar Bajpai, Ranjeet Kumar, Koushlendra Kumar Singh and George Giakos.


About this article


Cite this article

Bhaumik, U., Singh, K.K., Akbari, A.S. et al. A Deep Learning-Based Approach to Detect Correct Suryanamaskara Pose. SN COMPUT. SCI. 3, 337 (2022). https://doi.org/10.1007/s42979-022-01226-6
