
Title: View-invariant 3D Skeleton-based Human Activity Recognition based on Transformer and Spatio-temporal Features

Authors: Ahmed Snoun, Tahani Bouchrika and Olfa Jemai

Affiliation: Research Team in Intelligent Machines (RTIM), National Engineering School of Gabes (ENIG), University of Gabes, Gabes, Tunisia

Keyword(s): Human Activity Recognition, 3D Skeleton, Spatio-temporal Features, View-invariant, Transformer.

Abstract: With the emergence of depth sensors, real-time 3D human skeleton estimation has become easier to accomplish. Thus, methods for human activity recognition (HAR) based on the 3D skeleton have become increasingly accessible. In this paper, we introduce a new approach for human activity recognition using 3D skeletal data. Our approach generates a set of spatio-temporal, view-invariant features from the skeleton joints. The extracted features are then analyzed by a standard Transformer encoder in order to recognize the activity. Transformers, which are built on the self-attention mechanism, have been successful in many domains in recent years, which makes them well suited to HAR. The proposed approach shows promising performance on several well-known datasets that provide 3D skeleton data, namely KARD, Florence 3D, UTKinect Action 3D and MSR Action 3D.
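To make the pipeline described in the abstract more concrete, the sketch below (not the authors' implementation) shows one way to combine view-invariant skeleton features with a standard Transformer encoder in Python/PyTorch: per-frame pairwise joint distances are used as a simple view-invariant feature, and a Transformer encoder with temporal average pooling classifies the clip. The feature choice, layer sizes and input shapes are illustrative assumptions, not values from the paper.

# Minimal sketch (assumptions, not the authors' code): view-invariant
# skeleton features + a standard Transformer encoder for HAR.
import torch
import torch.nn as nn


def pairwise_distance_features(joints: torch.Tensor) -> torch.Tensor:
    """joints: (frames, num_joints, 3) -> (frames, num_joints * num_joints).

    Pairwise joint distances are unaffected by camera rotation and
    translation, which is one common way to obtain view-invariant features.
    """
    diffs = joints.unsqueeze(2) - joints.unsqueeze(1)   # (T, J, J, 3)
    dists = diffs.norm(dim=-1)                          # (T, J, J)
    return dists.flatten(start_dim=1)                   # (T, J*J)


class SkeletonTransformer(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int,
                 d_model: int = 128, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)        # embed per-frame features
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)     # activity classifier

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim)
        x = self.proj(feats)
        x = self.encoder(x)                             # self-attention over frames
        return self.head(x.mean(dim=1))                 # temporal average pooling


# Example: 20 joints (Kinect-style skeleton), 30-frame clip, 10 activity classes.
joints = torch.randn(30, 20, 3)
feats = pairwise_distance_features(joints).unsqueeze(0)   # (1, 30, 400)
model = SkeletonTransformer(feat_dim=400, num_classes=10)
logits = model(feats)                                      # (1, 10)

The paper's actual spatio-temporal feature set and encoder configuration may differ; the sketch only illustrates why pairwise distances give view invariance and how a Transformer encoder consumes a per-frame feature sequence.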

CC BY-NC-ND 4.0


Paper citation in several formats:
Snoun, A.; Bouchrika, T. and Jemai, O. (2022). View-invariant 3D Skeleton-based Human Activity Recognition based on Transformer and Spatio-temporal Features. In Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-549-4; ISSN 2184-4313, SciTePress, pages 706-715. DOI: 10.5220/0010895300003122

@conference{icpram22,
author={Ahmed Snoun and Tahani Bouchrika and Olfa Jemai},
title={View-invariant 3D Skeleton-based Human Activity Recognition based on Transformer and Spatio-temporal Features},
booktitle={Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
year={2022},
pages={706-715},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010895300003122},
isbn={978-989-758-549-4},
issn={2184-4313},
}

TY - CONF

JO - Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - View-invariant 3D Skeleton-based Human Activity Recognition based on Transformer and Spatio-temporal Features
SN - 978-989-758-549-4
IS - 2184-4313
AU - Snoun, A.
AU - Bouchrika, T.
AU - Jemai, O.
PY - 2022
SP - 706
EP - 715
DO - 10.5220/0010895300003122
PB - SciTePress
ER -