GOAT: A Generalized Cross-Dataset Activity Recognition Framework with Natural Language Supervision

Published: 21 November 2024

Abstract

Wearable human activity recognition faces challenges in cross-dataset generalization due to variations in device configurations and activity types across datasets. We present GOAT, a Generalized crOss-dataset Activity recogniTion framework that leverages learning with natural language supervision to address these challenges. GOAT utilizes textual attributes from activity labels and device on-body positions to enable multimodal pre-training, aligning wearable activity representations with corresponding textual representations. This approach enables GOAT to adapt to diverse device configurations and activity label spaces in downstream tasks. Our method incorporates a novel device position encoding technique, a Transformer-based activity encoder, and a cosine similarity loss function to enhance feature extraction and generalization capabilities. Extensive evaluations demonstrate GOAT's effectiveness across various scenarios, including comparisons with state-of-the-art baselines, component analysis, and zero-shot activity recognition. GOAT shows promise for advancing cross-dataset activity recognition, offering a flexible and scalable solution for diverse wearable sensing applications.
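The abstract does not give implementation details, but the core idea — aligning wearable activity embeddings with text embeddings of activity labels via a cosine similarity loss, then doing zero-shot recognition by nearest text embedding — can be sketched as follows. This is a minimal illustration, not GOAT's actual code: the function names, embedding shapes, and the simple `1 - cos` loss are assumptions, and the real framework additionally uses device position encoding and a Transformer activity encoder.

```python
import numpy as np

def l2_normalize(x):
    """Normalize each row to unit length."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def cosine_similarity(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    return l2_normalize(a) @ l2_normalize(b).T

def alignment_loss(sensor_emb, text_emb):
    """Mean (1 - cosine similarity) over matched sensor/text pairs.

    A stand-in for the paper's cosine similarity loss: row i of
    sensor_emb is assumed to be paired with row i of text_emb.
    """
    sims = cosine_similarity(sensor_emb, text_emb)
    return float(np.mean(1.0 - np.diag(sims)))

def zero_shot_classify(sensor_emb, label_text_emb):
    """Assign each sensor window the label whose text embedding is closest."""
    sims = cosine_similarity(sensor_emb, label_text_emb)
    return np.argmax(sims, axis=1)

# Toy demo: 3 hypothetical activity labels, 4 sensor windows.
rng = np.random.default_rng(0)
label_emb = rng.normal(size=(3, 16))            # stand-ins for text-encoder outputs
true_labels = np.array([0, 2, 1, 0])
windows = label_emb[true_labels] + 0.05 * rng.normal(size=(4, 16))

print(zero_shot_classify(windows, label_emb))
print(round(alignment_loss(windows, label_emb[true_labels]), 3))
```

Because classification reduces to nearest-text-embedding lookup, the label set at inference time need not match the one seen during pre-training — which is what makes the zero-shot and cross-dataset settings possible.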


Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 8, Issue 4
December 2024
1788 pages
EISSN: 2474-9567
DOI: 10.1145/3705705

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. human activity recognition
  2. pre-trained model
  3. representation learning
  4. wearable sensors

Qualifiers

  • Research-article
  • Research
  • Refereed

