Human Activity Recognition From Multi-modal Wearable Sensor Data Using Deep Multi-stage LSTM Architecture Based on Temporal Feature Aggregation


Abstract:

Activity recognition from wearable sensors is a promising field of research with a wide variety of applications for tracking human activity remotely. In this paper, a multi-stage long short-term memory (LSTM) based deep neural network is proposed to integrate multimodal features from numerous sensors for activity recognition. In the first stage, an individual stack of LSTM layers is introduced for each sensor's data to separately extract effective temporal features. Afterward, the features extracted from the different sensors are aggregated while maintaining their temporal dependency. Finally, for joint optimization of the aggregated multimodal features, a global feature optimizer network is proposed, consisting of multiple LSTM layers followed by a series of densely connected layers that extract global features through the fusion of the multimodal features. Extensive experiments on a publicly available dataset provide very satisfactory performance, with an average F1 score of 83.9%.
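A minimal PyTorch sketch of the three-stage pipeline described in the abstract: per-sensor LSTM stacks, temporal aggregation of the per-sensor features, and a global LSTM-plus-dense optimizer. The hidden sizes, layer counts, number of sensors, and class count are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

class MultiStageLSTM(nn.Module):
    def __init__(self, sensor_dims, hidden=64, num_classes=12):
        super().__init__()
        # Stage 1: one LSTM stack per sensor to extract temporal features separately.
        self.sensor_lstms = nn.ModuleList(
            [nn.LSTM(d, hidden, num_layers=2, batch_first=True) for d in sensor_dims]
        )
        # Stage 3: global feature optimizer, i.e. LSTM layers over the aggregated
        # multimodal sequence followed by densely connected layers.
        self.global_lstm = nn.LSTM(hidden * len(sensor_dims), hidden,
                                   num_layers=2, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, sensor_streams):
        # sensor_streams: list of tensors, each of shape (batch, time, sensor_dim)
        per_sensor = [lstm(x)[0] for lstm, x in zip(self.sensor_lstms, sensor_streams)]
        # Stage 2: aggregate features along the channel axis, keeping the time axis
        # so the temporal dependency is preserved.
        fused = torch.cat(per_sensor, dim=-1)      # (batch, time, hidden * n_sensors)
        out, _ = self.global_lstm(fused)
        return self.classifier(out[:, -1, :])      # classify from the last time step

# Example: three hypothetical tri-axial sensors (e.g. accelerometer, gyroscope, magnetometer).
model = MultiStageLSTM(sensor_dims=[3, 3, 3], num_classes=12)
logits = model([torch.randn(8, 100, 3) for _ in range(3)])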
Date of Conference: 09-12 August 2020
Date Added to IEEE Xplore: 02 September 2020
Conference Location: Springfield, MA, USA
