Using gestures to convey internal mental models and index multimedia content

  • Original Article · AI & SOCIETY

Abstract

Gestures can serve as external representations of abstract concepts that may otherwise be difficult to illustrate. Gestures often accompany verbal statements as an embodiment of mental models that augment the communication of ideas, concepts, or envisioned shapes of products. A gesture is also an indicator of the subject and context of the issue under discussion. We argue that if gestures can be identified and formalized, they can be used as a knowledge indexing and retrieval tool and can provide a useful access point into unstructured digital video data. We present a methodology and a prototype, called I-Gesture, that allows users to (1) define a vocabulary of gestures for a specific domain, (2) build a digital library of the gesture vocabulary, and (3) mark up entire video streams based on the predefined vocabulary for future search and retrieval of digital content from the archive. The I-Gesture methodology and prototype are illustrated through scenarios in which they can be utilized. The paper concludes with results of evaluation experiments with I-Gesture using a test bed of design-construction projects.
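The three-step workflow described above (define a gesture vocabulary, build a library of it, mark up video against it) can be pictured as a small data model. The Python sketch below is purely illustrative and is not the authors' implementation; every name in it (Gesture, VideoIndex, mark_up, retrieve) and the example gestures are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Gesture:
    """One entry in a domain-specific gesture vocabulary (step 1)."""
    name: str         # e.g. "span" or "curve"
    description: str  # the mental model the gesture conveys

@dataclass
class GestureAnnotation:
    """A recognized gesture tied to a segment of a video stream."""
    gesture: Gesture
    start_sec: float
    end_sec: float

@dataclass
class VideoIndex:
    """A library of annotations supporting mark-up and retrieval (steps 2-3)."""
    annotations: list = field(default_factory=list)

    def mark_up(self, gesture: Gesture, start_sec: float, end_sec: float) -> None:
        # Record that a vocabulary gesture occurs in this video segment.
        self.annotations.append(GestureAnnotation(gesture, start_sec, end_sec))

    def retrieve(self, name: str) -> list:
        # Return the (start, end) segments where the named gesture was performed.
        return [(a.start_sec, a.end_sec)
                for a in self.annotations if a.gesture.name == name]

# Example: index a design-review video by two vocabulary gestures.
span = Gesture("span", "hands move apart to indicate a bridged distance")
curve = Gesture("curve", "hand traces the envisioned curvature of a surface")

index = VideoIndex()
index.mark_up(span, 12.0, 15.5)
index.mark_up(curve, 48.2, 51.0)
print(index.retrieve("span"))  # -> [(12.0, 15.5)]
```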



Notes

  1. Stanford University is processing the patent application; the current status is a provisional patent.


Acknowledgments

I-Gesture is part of the DIVAS project sponsored by MediaX, KDDI, and the Project Based Learning Lab at Stanford University. The authors would like to thank Dr. D. Farin and S. Kopf from the University of Mannheim, Germany, for the initial fruitful discussions and for assistance in using the MOCA Library of video processing algorithms.

Author information

Correspondence to Renate Fruchter.


Cite this article

Biswas, P., Fruchter, R. Using gestures to convey internal mental models and index multimedia content. AI & Soc 22, 155–168 (2007). https://doi.org/10.1007/s00146-007-0123-4

