DOI: 10.1145/3594739.3610753
Research article

Multimodal Sensor Data Fusion and Ensemble Modeling for Human Locomotion Activity Recognition

Published: 08 October 2023

Abstract

The primary objective of this study is to develop an algorithm pipeline that recognizes human locomotion activities from multimodal smartphone sensor data while minimizing prediction errors caused by differences between individuals. The multimodal sensor data provided for the 2023 SHL recognition challenge comprise three types of motion data and two types of radio sensor data. Our team, ‘HELP,’ presents an approach that aligns all the multimodal data into a single vector of 106 features and then blends the predictions of multiple learning models, each trained on a different number of features. The proposed neural network models, trained solely on data from a single individual, achieve F1 scores of up to 0.8 when recognizing the locomotion activities of other users. Through post-processing, including the ensemble of multiple learning models, we expect a performance improvement of 10% or more in F1 score.
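The blending step described in the abstract can be sketched as a soft-voting ensemble: each model emits class probabilities for a window of sensor data, and the per-class average across models decides the final label. This is a minimal illustration, not the authors' exact pipeline; the model count, class count, and equal weighting are assumptions.

```python
import numpy as np

def blend_predictions(prob_list):
    """Average class-probability matrices from several models (soft voting).

    prob_list: list of (n_samples, n_classes) arrays, one per model.
    Returns the predicted class index for each sample.
    """
    stacked = np.stack(prob_list)      # (n_models, n_samples, n_classes)
    mean_probs = stacked.mean(axis=0)  # average over models
    return mean_probs.argmax(axis=1)   # highest mean probability wins

# Toy example with two hypothetical models and two classes.
# On sample 1 the models disagree (0.6 vs 0.3 for class 1);
# the blended mean [0.55, 0.45] tips the decision to class 0.
m1 = np.array([[0.9, 0.1], [0.4, 0.6]])
m2 = np.array([[0.8, 0.2], [0.7, 0.3]])
print(blend_predictions([m1, m2]))  # prints [0 0]
```

In practice the models being blended could differ in how many of the 106 aligned features they consume; weighting each model by its validation F1 score is a common refinement of the plain average shown here.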



    Published In

    UbiComp/ISWC '23 Adjunct: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing
    October 2023
    822 pages
    ISBN:9798400702006
    DOI:10.1145/3594739
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Activity recognition
    2. Human locomotion
    3. Machine learning
    4. Multimodal sensors
    5. Neural networks
    6. SHL Dataset

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Korean government

    Conference

    UbiComp/ISWC '23

    Acceptance Rates

    Overall Acceptance Rate 764 of 2,912 submissions, 26%

    Cited By

    • (2024) A Hybrid Algorithmic Pipeline for Robust Recognition of Human Locomotion: Addressing Missing Sensor Modalities. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 591-596. https://doi.org/10.1145/3675094.3678462 (Online publication date: 5-Oct-2024)
    • (2024) Interpolation attention-based KAN for the Sussex-Huawei Locomotion-Transportation Recognition Challenge. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 580-584. https://doi.org/10.1145/3675094.3678460 (Online publication date: 5-Oct-2024)
    • (2024) Sussex-Huawei Locomotion Recognition Using Machine Learning and Deep Learning with Multi-sensor Data. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 563-568. https://doi.org/10.1145/3675094.3678457 (Online publication date: 5-Oct-2024)
    • (2023) Summary of SHL Challenge 2023: Recognizing Locomotion and Transportation Mode from GPS and Motion Sensors. Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing, 575-585. https://doi.org/10.1145/3594739.3610758 (Online publication date: 8-Oct-2023)
