Research Article
DOI: 10.1145/3286978.3287024

Deep Auto-Set: A Deep Auto-Encoder-Set Network for Activity Recognition Using Wearables

Published: 05 November 2018

Abstract

Automatic recognition of human activities from time-series sensor data (referred to as HAR) is a growing area of research in ubiquitous computing. Most recent research in the field adopts supervised deep learning paradigms to automate extraction of intrinsic features from raw signal inputs and addresses HAR as a multi-class classification problem where detecting a single activity class within the duration of a sensory data segment suffices. However, due to the innate diversity of human activities and their corresponding duration, no data segment is guaranteed to contain sensor recordings of a single activity type. In this paper, we express HAR more naturally as a set prediction problem where the predictions are sets of ongoing activity elements with unfixed and unknown cardinality. For the first time, we address this problem by presenting a novel HAR approach that learns to output activity sets using deep neural networks. Moreover, motivated by the limited availability of annotated HAR datasets as well as the unfortunate immaturity of existing unsupervised systems, we complement our supervised set learning scheme with a prior unsupervised feature learning process that adopts convolutional auto-encoders to exploit unlabeled data. The empirical experiments on two widely adopted HAR datasets demonstrate the substantial improvement of our proposed methodology over the baseline models.
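To make the two ideas in the abstract concrete, the sketch below is a minimal, hypothetical PyTorch rendering (not the authors' released code): a 1-D convolutional auto-encoder is pre-trained to reconstruct unlabeled sensor windows, and its encoder is then reused under a set-prediction head that scores every activity class independently and separately predicts the cardinality of the activity set. All names (ConvAutoEncoder, SetHead, set_loss), layer sizes, channel and class counts, and the loss weighting are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn


class ConvAutoEncoder(nn.Module):
    """Unsupervised feature learner for raw sensor windows of shape (channels, T)."""

    def __init__(self, channels: int = 9, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, hidden, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):            # x: (batch, channels, T), T even
        z = self.encoder(x)          # latent features
        return self.decoder(z), z    # reconstruction drives the pre-training loss


class SetHead(nn.Module):
    """Set predictor on top of a pre-trained encoder (e.g. ConvAutoEncoder().encoder)."""

    def __init__(self, encoder: nn.Module, hidden: int = 64, n_classes: int = 18):
        super().__init__()
        self.encoder = encoder                               # weights initialised by pre-training
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.class_scores = nn.Linear(hidden, n_classes)     # per-class membership logits
        self.cardinality = nn.Linear(hidden, n_classes + 1)  # distribution over set sizes 0..K

    def forward(self, x):
        h = self.pool(self.encoder(x)).squeeze(-1)           # (batch, hidden)
        return self.class_scores(h), self.cardinality(h)


def set_loss(class_logits, card_logits, target_sets, target_sizes, alpha=1.0):
    """Illustrative fine-tuning loss: a multi-label term plus a set-cardinality term."""
    membership = nn.functional.binary_cross_entropy_with_logits(class_logits, target_sets)
    size = nn.functional.cross_entropy(card_logits, target_sizes)
    return membership + alpha * size

Under this reading, pre-training minimises reconstruction error on unlabeled windows, fine-tuning combines a per-class binary cross-entropy with a cross-entropy over set sizes, and at inference the predicted cardinality k would select the k highest-scoring classes as the output activity set; the actual loss formulation and decoding rule used in the paper may differ.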



Published In

MobiQuitous '18: Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services
November 2018
490 pages
ISBN: 9781450360937
DOI: 10.1145/3286978

In-Cooperation

  • EAI: The European Alliance for Innovation

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Activity Recognition
  2. Deep Learning
  3. Time-series Data
  4. Wearable Sensors

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

MobiQuitous '18
MobiQuitous '18: Computing, Networking and Services
November 5 - 7, 2018
New York, NY, USA

Acceptance Rates

Overall Acceptance Rate 26 of 87 submissions, 30%

