Modeling Trajectories for 3D Motion Analysis

  • Conference paper
Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019)

Abstract

3D motion analysis, performed by projecting trajectories onto manifolds, can be useful in a range of video-based applications. In this work, we use two manifolds, the Grassmann manifold and the special orthogonal group SO(3), to analyse complex motions accurately from skeleton data alone while handling rotation invariance. First, we project the skeleton sequence onto the Grassmann manifold to model the human motion as a trajectory. Then, we introduce the second manifold, SO(3), in order to take into account the rotation, ignored by the Grassmann representation, between the couples matched on this manifold. Our objective is to find the best weighted linear combination of the Grassmann and SO(3) distances according to the nature of the input motion. To validate the proposed 3D motion analysis method, we applied it to action recognition, person re-identification and sport performance evaluation. Experiments on three public datasets for 3D human action recognition (G3D-Gaming, UTD-MHAD multimodal action and Florence3D-Action), two public datasets for person re-identification (IAS-Lab RGBD-ID and BIWI-Lab RGBD-ID) and a recent dataset of throwing motions of handball players (H3DD) demonstrate the effectiveness of the proposed method.
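
As a concrete illustration of the weighted linear combination described above, the short Python sketch below combines a Grassmann distance (computed from the principal angles between two subspaces) with an SO(3) geodesic distance between two rotation matrices. This is only a minimal sketch under stated assumptions: the helper names, the toy data and the weight alpha are illustrative placeholders and do not reproduce the authors' implementation or the weights used in the paper.

    import numpy as np

    def grassmann_distance(U1, U2):
        # Principal-angle (geodesic) distance between the subspaces spanned by the
        # orthonormal columns of U1 and U2; the singular values of U1^T U2 are the
        # cosines of the principal angles.
        s = np.linalg.svd(U1.T @ U2, compute_uv=False)
        theta = np.arccos(np.clip(s, -1.0, 1.0))
        return np.linalg.norm(theta)

    def so3_distance(R1, R2):
        # Geodesic (rotation-angle) distance between two rotation matrices in SO(3).
        cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
        return np.arccos(np.clip(cos_angle, -1.0, 1.0))

    def combined_distance(U1, U2, R1, R2, alpha=0.5):
        # Weighted linear combination of the two distances; in the paper's setting
        # the weight would depend on the nature of the input motion (alpha here is
        # an illustrative default, not a value from the paper).
        return alpha * grassmann_distance(U1, U2) + (1.0 - alpha) * so3_distance(R1, R2)

    # Toy usage with random orthonormal bases and rotations.
    rng = np.random.default_rng(0)
    U1, _ = np.linalg.qr(rng.standard_normal((20, 3)))
    U2, _ = np.linalg.qr(rng.standard_normal((20, 3)))
    R1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    R2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    R1[:, 0] *= np.sign(np.linalg.det(R1))  # force det = +1 so R1 lies in SO(3)
    R2[:, 0] *= np.sign(np.linalg.det(R2))  # force det = +1 so R2 lies in SO(3)
    print(combined_distance(U1, U2, R1, R2, alpha=0.6))

In practice, the subspaces would be obtained from the projected skeleton sequences and the rotations from the matched frame pairs, with the weight tuned per motion type as the abstract indicates.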



Author information

Corresponding author

Correspondence to Amani Elaoud.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Elaoud, A., Barhoumi, W., Drira, H., Zagrouba, E. (2020). Modeling Trajectories for 3D Motion Analysis. In: Cláudio, A., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2019. Communications in Computer and Information Science, vol 1182. Springer, Cham. https://doi.org/10.1007/978-3-030-41590-7_17


  • DOI: https://doi.org/10.1007/978-3-030-41590-7_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41589-1

  • Online ISBN: 978-3-030-41590-7

  • eBook Packages: Computer Science (R0)
