Abstract
The 3-D task space of modeling and animation is usually reduced to the separate control dimensions supported by conventional interactive devices. This limitation maps only a partial view of the problem onto the device space at a time, resulting in a tedious and unnatural control interface. This paper applies the DataGlove interface to modeling and animating scene behaviors. The modeling interface selects, scales, rotates, translates, copies, and deletes instances of primitives. These basic modeling operations are performed directly in the task space using hand shapes and motions: hand shapes are recognized as discrete states that trigger commands, and hand motion is mapped to the movement of a selected instance. Interaction through the hand interface makes the user a participant in the behavior simulation. Both event triggering and role switching of the hand are experimented with in simulation. In event mode, the hand triggers control signals or commands through a menu interface; in object mode, the hand acts as an object whose appearance or motion influences the motions of other objects in the scene. The hand's involvement creates a diversity of dynamic situations for testing variable scene behaviors. Our experiments have shown the potential of this interface for working directly in the 3-D modeling and animation task space.
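The abstract describes hand shapes being recognized as discrete states that trigger modeling commands (select, scale, rotate, translate, copy, delete). A minimal sketch of such a posture-to-command mapping is shown below; the posture table, flexion threshold, and function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: map per-finger flexion readings from a glove to
# discrete posture states, each of which triggers a modeling command.
# The posture table and threshold are assumptions for illustration only.

POSTURES = {
    (1, 0, 0, 0, 0): "select",     # index finger extended
    (1, 1, 0, 0, 0): "translate",  # index + middle extended
    (1, 1, 1, 0, 0): "rotate",
    (1, 1, 1, 1, 0): "scale",
    (0, 0, 0, 0, 0): "copy",       # closed fist
    (1, 1, 1, 1, 1): "delete",     # open hand
}

def classify(flexion, threshold=0.5):
    """Map five flexion values (0 = straight, 1 = fully bent) to a command.

    Each finger is binarized against the threshold (1 = extended),
    and the resulting tuple is looked up in the posture table.
    Returns None when the hand shape matches no known posture.
    """
    key = tuple(1 if f < threshold else 0 for f in flexion)
    return POSTURES.get(key)

print(classify([0.1, 0.2, 0.8, 0.9, 0.9]))  # index + middle extended -> translate
```

In a real glove-driven interface, the recognized state would gate the mode of the continuous channel as well: while a posture such as "translate" is held, the tracked hand position drives the selected instance's movement.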
Sun, H. Hand interface in traditional modeling and animation tasks. J. of Comput. Sci. & Technol. 11, 286–295 (1996). https://doi.org/10.1007/BF02943135