DOI: 10.1145/2070481.2070505

Multimodal segmentation of object manipulation sequences with product models

Published: 14 November 2011

Abstract

In this paper we propose an approach for unsupervised segmentation of continuous object manipulation sequences into semantically differing subsequences. The proposed method estimates segment borders based on an integrated consideration of three modalities (tactile feedback, hand posture, audio), yielding robust and accurate results in a single pass. To this end, a Bayesian approach, originally applied by Fearnhead to segment one-dimensional time series data, is extended to allow an integrated segmentation of multimodal sequences. We propose a joint product model which combines modality-specific likelihoods to model segments. Weight parameters control the influence of each modality within the joint model. We discuss the relevance of all modalities based on an evaluation of the temporal and structural correctness of segmentation results obtained from various weight combinations.
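The abstract does not state the model explicitly, but a minimal sketch of such a joint product model, assuming each modality m contributes a segment likelihood p_m raised to a weight w_m (the notation below is chosen here for illustration only), could read:

    p(y_{s:t}) \propto \prod_{m \in \{\mathrm{tactile},\,\mathrm{posture},\,\mathrm{audio}\}} p_m\!\left( y^{(m)}_{s:t} \right)^{w_m}, \qquad w_m \ge 0,

where y_{s:t} denotes the multimodal observations of a candidate segment between changepoints s and t. Under this reading, setting a weight to zero excludes that modality from the joint segment score, which is consistent with the abstract's evaluation of segmentation quality over various weight combinations.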

References

[1] A. Barchunova, R. Haschke, U. Grossekathoefer, S. Wachsmuth, H. Janssen, and H. Ritter. Unsupervised segmentation of object manipulation operations from multimodal input. In B. Hammer and T. Villmann, editors, New Challenges in Neural Computation, Machine Learning Reports, 2011.
[2] Immersion CyberGlove II. http://www.cyberglovesystems.com/products/cyberglove-ii/overview.
[3] P. Fearnhead. Exact Bayesian curve fitting and signal segmentation. Signal Processing, 53, 2005.
[4] P. Fearnhead. Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 2006.
[5] H. Kawasaki, K. Nakayama, and G. Parker. Teaching for multi-fingered robots based on motion intention in virtual reality. In IECON, 2000.
[6] C. Li, P. Kulkarni, and B. Prabhakaran. Motion stream segmentation and recognition by classification. In ICASSP. IEEE, 2006.
[7] K. Matsuo, K. Murakami, T. Hasegawa, K. Tahara, and K. Ryo. Segmentation method of human manipulation task based on measurement of force imposed by a human hand on a grasped object. In IROS. IEEE, 2009.
[8] G. Ogris, T. Stiefmeier, P. Lukowicz, and G. Tröster. Using a complex multi-modal on-body sensor system for activity spotting. In IEEE International Symposium on Wearable Computers, 2008.
[9] J. A. Ward, P. Lukowicz, G. Tröster, and T. E. Starner. Activity recognition of assembly tasks using body-worn microphones and accelerometers. TPAMI, 2006.
[10] X. Xuan and K. Murphy. Modeling changing dependency structure in multivariate time series. In ICML. ACM, 2007.
[11] R. Zöllner, T. Asfour, and R. Dillmann. Programming by demonstration: Dual-arm manipulation tasks for humanoid robots. In IROS. IEEE, 2004.

Cited By

  • Learning of object manipulation operations from continuous multimodal input. 2011 11th IEEE-RAS International Conference on Humanoid Robots, pp. 507-512, Oct. 2011. DOI: 10.1109/Humanoids.2011.6100880

Published In

ICMI '11: Proceedings of the 13th international conference on multimodal interfaces
November 2011
432 pages
ISBN: 9781450306416
DOI: 10.1145/2070481

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. manual object manipulation
  2. multimodality
  3. perception

Qualifiers

  • Poster

Conference

ICMI'11

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%
