Abstract
The widespread adoption of deep learning has significantly expanded the capabilities of wearable sensors, enabling tasks such as human activity recognition and localization. Nevertheless, annotating sensor data for training remains expensive. Unlabeled sensor data is far easier to collect than labeled data, which has driven growing interest in self-supervised learning for human activity recognition (HAR). Masked reconstruction of raw sensor data is a commonly used self-supervised pretext task; when applied to HAR, it typically masks segments along the time axis and reconstructs them. However, masking and reconstructing raw sensor data may discard crucial information, yielding representations with lower semantic levels. To address this, we present a new masking-and-reconstruction strategy, Human Activity Recognition with Feature Masking and Reconstruction (HARFMR), designed specifically for HAR. The architecture masks features at a random ratio and reconstructs the original sensor data, compelling the encoder to exploit the contextual correlations among the data's features and their properties during reconstruction. Evaluation on three public datasets demonstrates that HARFMR surpasses existing masked-reconstruction schemes in both self-supervised and semi-supervised settings.
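The abstract does not spell out the masking procedure, but the core idea, masking along the feature (channel) dimension at a random ratio rather than along the time axis, can be sketched as below. This is a minimal illustration under assumptions not stated in the abstract: zero-masking of whole channels, a mask ratio drawn uniformly per window, and an MSE reconstruction loss restricted to masked channels. All function names and the ratio range are illustrative, not the authors' implementation.

```python
import numpy as np

def feature_mask(window, mask_ratio=None, rng=None):
    """Mask feature (channel) columns of one sensor window.

    window: array of shape (T, C) -- T time steps, C sensor channels.
    Unlike time-centric masking, entire feature columns are zeroed, so
    reconstructing them forces the encoder to rely on cross-channel
    context. When mask_ratio is None, it is drawn at random per window,
    mirroring the random-ratio scheme described in the abstract.
    """
    rng = rng or np.random.default_rng()
    T, C = window.shape
    if mask_ratio is None:
        mask_ratio = rng.uniform(0.1, 0.5)  # assumed range, not from the paper
    n_masked = max(1, int(round(C * mask_ratio)))
    masked_cols = rng.choice(C, size=n_masked, replace=False)
    mask = np.zeros(C, dtype=bool)
    mask[masked_cols] = True
    corrupted = window.copy()
    corrupted[:, mask] = 0.0  # encoder sees `corrupted`
    return corrupted, mask

def masked_mse(reconstruction, target, mask):
    """Reconstruction loss computed only over the masked channels."""
    diff = (reconstruction - target)[:, mask]
    return float(np.mean(diff ** 2))
```

In use, an encoder-decoder network would take `corrupted` as input and be trained to minimize `masked_mse` against the original window; the encoder is then kept for downstream activity classification.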
© 2024 IFIP International Federation for Information Processing
Cui, W., Chen, Y., Huang, Y., Liu, C., Zhu, T. (2024). HARFMR: Human Activity Recognition with Feature Masking and Reconstruction. In: Shi, Z., Torresen, J., Yang, S. (eds) Intelligent Information Processing XII. IIP 2024. IFIP Advances in Information and Communication Technology, vol 704. Springer, Cham. https://doi.org/10.1007/978-3-031-57919-6_6
Print ISBN: 978-3-031-57918-9
Online ISBN: 978-3-031-57919-6