
A dialogue-based annotation for activity recognition

Published: 09 September 2019

Abstract

This paper presents a method for collecting training labels for human activity recognition using a dialogue system. To show the feasibility of dialogue-based annotation, we implemented the dialogue system and conducted experiments in a lab setting. The preliminary activity recognition performance attained an F-measure of 0.76. We also analyze the collected data to provide a better understanding of what users expect from the system, how they interact with it, and its other potential uses. Finally, we discuss the results obtained and possible directions for future work.
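The paper describes collecting activity labels through dialogue and evaluating a recognizer with the F-measure. As a rough, hypothetical sketch of how such dialogue-collected labels could be turned into supervised training data (not the authors' implementation), the Python example below aligns reported activities and time spans to fixed-length sensor windows, trains a random-forest classifier on placeholder features, and reports a weighted F-measure. The window length, feature set, classifier choice, and all names are assumptions for illustration only.

```python
# Illustrative sketch only: aligning dialogue-collected activity labels with
# sensor windows and scoring a classifier with the F-measure.
# Window length, features, and the classifier are assumptions, not details
# taken from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

WINDOW_SEC = 5  # assumed fixed-length window for segmenting the sensor stream

# Labels as they might come out of a dialogue turn such as
# "I was cooking from 9:00 to 9:20" after intent/entity extraction.
dialogue_labels = [
    {"activity": "cooking",  "start": 0,    "end": 1200},   # seconds
    {"activity": "cleaning", "start": 1200, "end": 1800},
    {"activity": "resting",  "start": 1800, "end": 2400},
]

def label_for_window(t_start, t_end):
    """Return the reported activity covering a window, or None if unlabeled."""
    center = (t_start + t_end) / 2
    for entry in dialogue_labels:
        if entry["start"] <= center < entry["end"]:
            return entry["activity"]
    return None

# Stand-in for real accelerometer features: one feature vector per window.
rng = np.random.default_rng(0)
n_windows = 2400 // WINDOW_SEC
X, y = [], []
for i in range(n_windows):
    t0, t1 = i * WINDOW_SEC, (i + 1) * WINDOW_SEC
    label = label_for_window(t0, t1)
    if label is None:
        continue  # windows without a dialogue label are skipped
    X.append(rng.normal(size=6))  # e.g. mean/std of 3-axis acceleration
    y.append(label)

X, y = np.array(X), np.array(y)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Weighted F-measure over the activity classes.
print("F-measure:", f1_score(y_test, pred, average="weighted"))
```

In the actual system, the labeled spans would come from parsed dialogue turns (e.g., recognized intents and time expressions) rather than the hard-coded list used here.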






Published In

UbiComp/ISWC '19 Adjunct: Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers
September 2019
1234 pages
ISBN: 9781450368698
DOI: 10.1145/3341162
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 09 September 2019


Author Tags

  1. activity recognition
  2. data collection
  3. dialogue system

Qualifiers

  • Research-article

Conference

UbiComp '19

Acceptance Rates

Overall acceptance rate: 764 of 2,912 submissions (26%)


Article Metrics

  • Downloads (last 12 months): 13
  • Downloads (last 6 weeks): 0
Reflects downloads up to 17 Feb 2025


Cited By
  • (2023) Generalisable Dialogue-based Approach for Active Learning of Activities of Daily Living. ACM Transactions on Interactive Intelligent Systems 13(3), 1-37. DOI: 10.1145/3616017. Online publication date: 11-Sep-2023.
  • (2022) How do Conversational Agents Transform Qualitative Interviews? Exploration and Support of Researchers’ Needs in Interviews at Scale. Companion Proceedings of the 27th International Conference on Intelligent User Interfaces, 124-128. DOI: 10.1145/3490100.3516478. Online publication date: 22-Mar-2022.
  • (2022) A Dialogue-Based Interface for Active Learning of Activities of Daily Living. Proceedings of the 27th International Conference on Intelligent User Interfaces, 820-831. DOI: 10.1145/3490099.3511130. Online publication date: 22-Mar-2022.
  • (2021) Experience Sampling Tool for Repetitive Skills Training in Sports using Voice User Interface. Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers, 54-55. DOI: 10.1145/3460418.3479283. Online publication date: 21-Sep-2021.
  • (2020) Automatic Labeled Dialogue Generation for Nursing Record Systems. Journal of Personalized Medicine 10(3), 62. DOI: 10.3390/jpm10030062. Online publication date: 16-Jul-2020.
  • (2019) Incremental Learning to Personalize Human Activity Recognition Models: The Importance of Human AI Collaboration. Sensors 19(23), 5151. DOI: 10.3390/s19235151. Online publication date: 25-Nov-2019.
