Abstract:
Using wearable sensors to identify human activities has elicited significant interest within the discipline of ubiquitous computing, owing to its ability to facilitate everyday life. Recent research has employed hybrid models to better leverage the modal and temporal information of sensors, enabling improved performance for wearable human activity recognition. Nevertheless, the lack of effective exploitation of human structural information and the limited capacity for cross-channel fusion remain major challenges. This study proposes a generic design, called GT-WHAR, to accommodate varying application scenarios and datasets while performing effective feature extraction and fusion. First, a novel and unified representation paradigm, namely Body-Sensing Graph Representation, is proposed to represent body movement as a graph set, which incorporates structural information by considering the intrinsic connectivity of the skeletal structure. Second, the newly designed Body-Node Attention Graph Network employs graph neural networks to extract and fuse cross-channel information within the graph set. Finally, the graph network is embedded in the proposed Bidirectional Temporal Learning Network, facilitating the extraction of temporal information in conjunction with the learned structural features. GT-WHAR outperformed state-of-the-art methods in extensive experiments conducted on benchmark datasets, demonstrating its validity and efficacy. In addition, we have demonstrated the generality of the framework through multiple research questions and provided an in-depth investigation of various influential factors.
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence (Volume: 8, Issue: 6, December 2024)