DOI: 10.1145/3410530.3414363

Perception of interaction between hand and object

Published: 12 September 2020

Abstract

Action knowledge graphs can play a central role in smart cities, smart homes, robot planning, and other applications. This is because both the subject and the object of an action carry more meaningful information for higher-level applications than the action alone as a predicate. We built a system that generates action knowledge graphs from video using deep learning. In particular, we propose an algorithm that perceives the interaction between a hand and an object by measuring the proximity between them while taking the direction of the fingers into account. We show that this approach achieves an accuracy of 83% on the STAIR Lab dataset.
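
The abstract only outlines the algorithm, so the following Python sketch illustrates one way the proximity-plus-finger-direction test could be realized. It is a hypothetical reconstruction, not the authors' implementation: it assumes 2D hand keypoints in the 21-point OpenPose hand layout and an object bounding box from a detector such as CenterNet, and the function name, distance threshold, and angle threshold are illustrative choices.

import numpy as np

# (knuckle, fingertip) index pairs per finger in the 21-point OpenPose hand layout.
FINGERS = [(2, 4), (5, 8), (9, 12), (13, 16), (17, 20)]

def hand_object_interaction(hand_keypoints: np.ndarray,
                            object_box: tuple,
                            dist_thresh: float = 60.0,
                            angle_thresh_deg: float = 45.0) -> bool:
    """Judge whether a hand interacts with an object (illustrative rule).

    hand_keypoints: (21, 2) array of 2D hand keypoints in pixels.
    object_box: (x1, y1, x2, y2) bounding box of the detected object.
    """
    x1, y1, x2, y2 = object_box
    obj_center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

    for knuckle_idx, tip_idx in FINGERS:
        knuckle = hand_keypoints[knuckle_idx]
        tip = hand_keypoints[tip_idx]

        # (a) Proximity: distance from the fingertip to the object center.
        dist = np.linalg.norm(tip - obj_center)
        if dist > dist_thresh:
            continue

        # (b) Finger direction: angle between the knuckle-to-tip vector and the
        #     tip-to-object vector; a small angle means the finger points at the object.
        finger_dir = tip - knuckle
        to_object = obj_center - tip
        if np.linalg.norm(finger_dir) < 1e-6 or np.linalg.norm(to_object) < 1e-6:
            continue
        cos_angle = np.dot(finger_dir, to_object) / (
            np.linalg.norm(finger_dir) * np.linalg.norm(to_object))
        angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

        if angle_deg <= angle_thresh_deg:
            return True  # at least one finger is both close to and pointing at the object
    return False

if __name__ == "__main__":
    # Toy example: a synthetic hand whose index fingertip is near the object box.
    keypoints = np.zeros((21, 2))
    keypoints[5] = [100, 100]   # index knuckle
    keypoints[8] = [140, 100]   # index fingertip pointing right
    box = (150, 80, 200, 130)   # object just to the right of the fingertip
    print(hand_object_interaction(keypoints, box))  # True

In a full pipeline, hand-object pairs that pass such a test could then be used to populate (subject, action, object) triples in the action knowledge graph described above.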

References

[1] Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, July 2019.
[2] CMU-Perceptual-Computing-Lab: OpenPose. https://github.com/CMU-Perceptual-Computing-Lab/OpenPose, 2020.
[3] H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine, 13(2): 99-110, 2006.
[4] Jun Hatori, Yuta Kikuchi, Sosuke Kobayashi, Kuniyuki Takahashi, Yuta Tsuboi, Yuya Unno, Wilson Ko, and Jethro Tan. Interactively picking real-world objects with unconstrained spoken language instructions. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2018.
[5] Tsuyoshi Okita and Sozo Inoue. Recognition of Multiple Overlapping Activities Using Compositional CNN-LSTM Model. Proceedings of the 2017 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers Adjunct, 2017.
[6] STAIR Lab: A Large-Scale Video Dataset of Everyday Human Actions. https://actions.stair.center/, 2020.
[7] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Proceedings of the International Conference on Machine Learning, 2015.
[8] xingyizhou: CenterNet (Objects as Points). https://github.com/xingyizhou/CenterNet, 2020.
[9] Xingyi Zhou, Dequan Wang, and Philipp Krähenbühl. Objects as Points. arXiv preprint, April 2019.
[10] Yutaka Matsuo. Unsolved problem in AI and embodiments, Symbol grounding. Japanese AI Society Conference, 2016.
[11] Michael Hardegger, Daniel Roggen, and Gerhard Tröster. 3D ActionSLAM: wearable person tracking in multi-floor environments. Personal and Ubiquitous Computing, January 2015.

Published In

UbiComp/ISWC '20 Adjunct: Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers
September 2020
732 pages
ISBN:9781450380768
DOI:10.1145/3410530

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. centernet
  2. interaction
  3. knowledge graph
  4. object detection
  5. openpose
  6. pose estimation

Qualifiers

  • Research-article

Conference

UbiComp/ISWC '20

Acceptance Rates

Overall Acceptance Rate 764 of 2,912 submissions, 26%
