“Finger-Pointer”: Pointing interface by image processing
References (17)
Louder than words, (1987), et al.
Human reader: An advanced man-machine interface based on human images and speech, Trans. IEICE, (1987), et al.
A real time head motion detection system
An application of optical flow—Extraction of facial expression
Put-that-there: Voice and gesture at the graphics interface, ACM SIGGRAPH, (1980), R.A. Bolt
A synthetic visual environment with hand gestures and voice input
Recognition of Japanese manual alphabet using thinning image, IEICE Tech. Rep., (1993), et al.
A gesture interface for 3D shape manipulation
Cited by (49)
Relative Pointing Interface: A gesture interaction method based on the ability to divide space
2020, International Journal of Industrial Ergonomics
Citation Excerpt: Various methods to project the position of the hand onto the screen have been proposed. In one study (Fukumoto et al., 1994), the direction of the pointing gesture was recognized using a virtual reference point in space called the Virtual Projection Origin (VPO). The cursor was projected at the point of intersection between the screen and the straight line that connects the VPO to the fingertip.
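The projection described in this excerpt is a ray-plane intersection. A minimal sketch of that geometry follows, assuming the screen is modeled by a point on it and a unit normal, and that the VPO and fingertip are already tracked as 3-D points (the function name and coordinate frame are illustrative, not taken from the cited paper):

import numpy as np

def project_cursor(vpo, fingertip, plane_point, plane_normal):
    # Cast a ray from the VPO through the fingertip and intersect it
    # with the screen plane; the intersection point is the cursor.
    vpo = np.asarray(vpo, dtype=float)
    direction = np.asarray(fingertip, dtype=float) - vpo
    denom = float(np.dot(plane_normal, direction))
    if abs(denom) < 1e-9:
        return None                  # pointing parallel to the screen
    t = float(np.dot(plane_normal, np.asarray(plane_point, dtype=float) - vpo)) / denom
    if t < 0:
        return None                  # screen lies behind the pointing ray
    return vpo + t * direction

# Screen modeled as the plane z = 0; the user points toward it from z > 0.
cursor = project_cursor(vpo=[0.1, 1.2, 2.0], fingertip=[0.3, 1.0, 1.5],
                        plane_point=[0.0, 0.0, 0.0], plane_normal=[0.0, 0.0, 1.0])
print(cursor)                        # approx. [0.9, 0.4, 0.0]

The same formula handles any screen pose, since the screen enters only through plane_point and plane_normal.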
A real-time vision-based hand gesture interaction system for virtual EAST
2016, Fusion Engineering and Design
Citation Excerpt: Siddharth S. Rautaray [4] proposed a hand gesture interaction system for dynamic environments in 2011, which uses image processing techniques to perform detection, segmentation, tracking, and recognition of the hand, converting hand gestures into meaningful computer commands. Based on the shape features of the hand or fingers, researchers [5,6] have proposed methods to estimate the pointing direction of a finger or multiple fingers. Another line of work [7–9] uses a 3D hand model to analyze gestures, with one or more cameras capturing images.
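As a rough illustration of the shape-based pointing estimation the excerpt mentions, the sketch below takes a binary hand mask (assumed to come from an earlier segmentation step) and approximates the fingertip as the foreground pixel farthest from the hand centroid, with the pointing direction taken from centroid to fingertip; this is a common simplification for a single extended finger, not the specific method of refs. [5,6].

import numpy as np

def pointing_direction(mask):
    # mask: 2-D boolean array, True on hand pixels (from segmentation).
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                          # no hand in view
    centroid = np.array([xs.mean(), ys.mean()])
    pts = np.stack([xs, ys], axis=1).astype(float)
    dists = np.linalg.norm(pts - centroid, axis=1)
    if dists.max() == 0.0:
        return None                          # degenerate one-pixel mask
    fingertip = pts[dists.argmax()]          # farthest pixel, assumed fingertip
    direction = (fingertip - centroid) / dists.max()
    return fingertip, direction              # both in (x, y) image coordinates

# Toy mask: a 3x3 "palm" with a one-pixel-wide "finger" extending right.
mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 1:4] = True
mask[5, 4:8] = True
tip, vec = pointing_direction(mask)
print(tip, vec)                              # tip near (7, 5), direction approx. +x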
Physical Browsing and Selection-Easy Interaction with Ambient Services
2010, Human-Centric Interfaces for Ambient Intelligence
A multidimensional dynamic time warping algorithm for efficient multimodal fusion of asynchronous data streams
2009, Neurocomputing
Citation Excerpt: In contrast to early fusion architectures, mutual information from another modality is not considered during the recognition of a single mode, which causes late fusion to perform worse than early fusion when the modalities are correlated, as in bimodal emotion recognition [21] or when lip-reading is used to enhance speech recognition. Multimodal fusion at the semantic level has been applied in systems such as Bolt's "Put-that-there" [10], ShopTalk [15], Finger-Pointer [19], and others [36,70,62]. Hybrid systems for multimodal integration attempt to combine the advantages of late and early fusion.
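For readers unfamiliar with the technique named in this citing paper's title, dynamic time warping (DTW) aligns two sequences that evolve at different rates. The sketch below is plain DTW over multidimensional feature vectors with a Euclidean frame cost, a minimal baseline under those assumptions rather than the asynchronous multi-stream variant the paper proposes.

import numpy as np

def dtw_cost(a, b):
    # a: (n, d) and b: (m, d) arrays of feature vectors, e.g. frames
    # from two modalities mapped into a common feature space.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame distance
            D[i, j] = cost + min(D[i - 1, j],            # skip a frame of a
                                 D[i, j - 1],            # skip a frame of b
                                 D[i - 1, j - 1])        # match the frames
    return D[n, m]                                       # accumulated alignment cost

# The second stream is a time-stretched copy of the first; DTW aligns
# them with near-zero cost despite the 2x length mismatch.
t = np.linspace(0.0, 1.0, 20)
a = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
b = a[np.repeat(np.arange(20), 2)]
print(dtw_cost(a, b))                                    # approx. 0.0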
Multimodal Interfaces. Combining Interfaces to Accomplish a Single Task
2008, HCI Beyond the GUI