DOI: 10.1145/3615834.3615838
Research article

Miss-placement Prediction of Multiple On-body Devices for Human Activity Recognition

Published: 11 October 2023

Abstract

In industrial applications, automatic human activity recognition plays a central role. Human-centered activity recognition methods using on-body devices (OBDs) are particularly suited to situations where the wearer's identity has to be protected. However, practitioners strongly assume that end-users wear OBDs correctly at deployment; in reality, this is hardly the case. Thus, there is a need for an activity-recognition system that is robust either at the recording stage or at the recognition stage. This contribution addresses a combination of both stages: it proposes recognizing miss-placements of OBDs on the human body while an activity is performed. We deploy a limb-oriented temporal convolutional neural network to recognize either that a miss-placement is occurring or the type of miss-placement. Preliminary results on a proposed dataset suggest that miss-placement classification is possible and can be used for end-user feedback during recording or leveraged in data post-processing.
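The abstract does not specify the network's configuration. As a rough illustration only, a limb-oriented temporal CNN can be sketched as one 1-D convolutional branch per limb operating over the time axis of that limb's IMU channels, with the pooled per-limb features fused for classification. Every detail below (branch layout, kernel size, channel counts, and the example class names) is an assumption for illustration, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_conv1d(x, w, b):
    """Valid 1-D convolution over the time axis, followed by ReLU.
    x: (T, C_in) sensor window, w: (K, C_in, C_out), b: (C_out,)."""
    T, _ = x.shape
    K, _, C_out = w.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def limb_branch(x, w, b):
    """One per-limb temporal-convolution branch with global
    average pooling over time, yielding a fixed-size feature."""
    return temporal_conv1d(x, w, b).mean(axis=0)

def classify(window, params):
    """window: dict limb -> (T, C) channel matrix. Concatenates the
    per-limb features and applies a softmax layer over hypothetical
    miss-placement classes ('correct', 'swapped', 'rotated')."""
    feats = np.concatenate(
        [limb_branch(window[limb], *params["branches"][limb])
         for limb in sorted(window)])
    logits = feats @ params["W"] + params["b"]
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy setup: two limb branches, 6 IMU channels each, 3 classes.
limbs = ["left_arm", "right_arm"]
params = {
    "branches": {l: (rng.normal(scale=0.1, size=(5, 6, 16)),
                     np.zeros(16)) for l in limbs},
    "W": rng.normal(scale=0.1, size=(2 * 16, 3)),
    "b": np.zeros(3),
}
window = {l: rng.normal(size=(100, 6)) for l in limbs}
probs = classify(window, params)
```

Keeping the branches limb-wise means each convolution only mixes channels from one device, so a miss-placement on one limb perturbs one branch's features rather than the whole representation.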



Published In

iWOAR '23: Proceedings of the 8th international Workshop on Sensor-Based Activity Recognition and Artificial Intelligence
September 2023
171 pages
ISBN:9798400708169
DOI:10.1145/3615834

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Dataset
  2. Deep Learning
  3. Human Activity Recognition
  4. Multi-channel time-series
  5. On-body devices

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

iWOAR 2023

Acceptance Rates

Overall Acceptance Rate 46 of 73 submissions, 63%
