Abstract
Gestures can serve as external representations of abstract concepts that may otherwise be difficult to illustrate. Gestures often accompany verbal statements as embodiments of mental models, augmenting the communication of ideas, concepts, or envisioned shapes of products. A gesture is also an indicator of the subject and context of the issue under discussion. We argue that if gestures can be identified and formalized, they can serve as a knowledge indexing and retrieval tool and can prove to be a useful access point into unstructured digital video data. We present a methodology and a prototype, called I-Gesture, that allows users to (1) define a vocabulary of gestures for a specific domain, (2) build a digital library of the gesture vocabulary, and (3) mark up entire video streams based on the predefined vocabulary for future search and retrieval of digital content from the archive. The I-Gesture methodology and prototype are illustrated through scenarios in which they can be utilized. The paper concludes with results of evaluation experiments with I-Gesture using a test bed of design-construction projects.
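The three-step workflow outlined above (define a gesture vocabulary, build a library, mark up video streams for later retrieval) can be sketched as a minimal index-and-retrieve loop. All class and method names below are hypothetical illustrations, not part of the actual I-Gesture implementation:

```python
from dataclasses import dataclass, field


@dataclass
class GestureVocabulary:
    """A domain-specific set of named gestures (step 1)."""
    gestures: set = field(default_factory=set)

    def define(self, name: str) -> None:
        self.gestures.add(name)


@dataclass
class GestureIndex:
    """Maps gestures to (video_id, timestamp) occurrences (steps 2-3)."""
    vocabulary: GestureVocabulary
    index: dict = field(default_factory=dict)

    def mark_up(self, video_id: str, timestamp: float, gesture: str) -> None:
        """Annotate one point in a video stream with a vocabulary gesture."""
        if gesture not in self.vocabulary.gestures:
            raise ValueError(f"'{gesture}' is not in the vocabulary")
        self.index.setdefault(gesture, []).append((video_id, timestamp))

    def retrieve(self, gesture: str) -> list:
        """Return all annotated occurrences of a gesture."""
        return self.index.get(gesture, [])


# Usage: define a small design-construction vocabulary, annotate a stream,
# then retrieve segments by gesture.
vocab = GestureVocabulary()
vocab.define("span")      # e.g., a hand motion indicating a beam span
vocab.define("rotate")    # e.g., indicating a rotated component
idx = GestureIndex(vocab)
idx.mark_up("design_review_01", 12.5, "span")
idx.mark_up("design_review_01", 47.0, "rotate")
print(idx.retrieve("span"))   # [('design_review_01', 12.5)]
```

In this sketch the retrieval step is a simple dictionary lookup; the paper's contribution lies in recognizing gestures from video so that such annotations can be produced in the first place.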
Notes
Stanford University is processing the patent application. Current status—provisional patent.
Acknowledgments
I-Gesture is part of the DIVAS project sponsored by MediaX, KDDI, and the Project Based Learning Lab at Stanford University. The authors would like to thank Dr. D. Farin and S. Kopf from the University of Mannheim, Germany, for the initial fruitful discussions and assistance in using the MOCA Library of video processing algorithms.
Cite this article
Biswas, P., Fruchter, R. Using gestures to convey internal mental models and index multimedia content. AI & Soc 22, 155–168 (2007). https://doi.org/10.1007/s00146-007-0123-4