DOI: 10.1145/3606041.3618059
Research Article

Functas Usability for Human Activity Recognition using Wearable Sensor Data

Published: 01 November 2023

ABSTRACT

Recent advances in data science have introduced implicit neural representations as a powerful approach to learning complex, high-dimensional functions without explicit equations or manual feature engineering. In this paper, we present our research on using the weights of these implicit neural representations, referred to as 'functas,' to characterize and classify batches of data, eliminating the need for manual feature engineering on raw data. Specifically, we demonstrate the efficacy of the functas method in the domain of human activity recognition, using output data from wearable sensors such as accelerometers and gyroscopes. Our results show that the functas approach is promising and suggest a possible shift in data science methodology.
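The core idea can be sketched as follows (an illustrative reconstruction, not the paper's exact architecture or training setup): fit a small sine-activation MLP to each windowed sensor signal, mapping time to sensor value, then use the fitted network's flattened weights as the 'functa' feature vector for that window. The `fit_functa` helper, network size, and hyperparameters below are assumptions chosen for illustration.

```python
import numpy as np

def fit_functa(signal, hidden=16, steps=500, lr=1e-2, seed=0):
    """Fit a tiny sine-activation MLP t -> x(t) to one sensor window
    and return its flattened weights (the 'functa' feature vector).
    Illustrative sketch only; sizes and optimizer are assumptions."""
    rng = np.random.default_rng(seed)
    t = np.linspace(-1.0, 1.0, len(signal)).reshape(-1, 1)
    y = np.asarray(signal, dtype=float).reshape(-1, 1)
    # One hidden layer: pred = sin(t W1 + b1) W2 + b2
    W1 = rng.normal(0.0, 1.0, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1))
    b2 = np.zeros(1)
    n = len(y)
    for _ in range(steps):
        pre = t @ W1 + b1
        h = np.sin(pre)                 # forward pass
        pred = h @ W2 + b2
        err = pred - y                  # gradient of 0.5 * MSE w.r.t. pred
        # backpropagate through the two layers
        gW2 = h.T @ err / n
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * np.cos(pre)
        gW1 = t.T @ dh / n
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    # The flattened parameters are the window's functa representation.
    return np.concatenate([W1.ravel(), b1, W2.ravel(), b2])

# One synthetic accelerometer window; its functa has 1*16 + 16 + 16*1 + 1 = 49 entries.
window = np.sin(6 * np.pi * np.linspace(-1, 1, 128))
functa = fit_functa(window)
print(functa.shape)  # (49,)
```

The resulting fixed-length vectors can then be fed to any standard classifier in place of hand-crafted features; the references suggest gradient-boosted trees or similar off-the-shelf models would be natural choices.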


Published in

HCMA '23: Proceedings of the 4th International Workshop on Human-centric Multimedia Analysis
November 2023, 56 pages
ISBN: 9798400702723
DOI: 10.1145/3606041
Program Chairs: Jingkuan Song, Wu Liu

Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall Acceptance Rate: 12 of 21 submissions, 57%
