Computer-Aided Design

Volume 46, January 2014, Pages 239-245

Technical note
GaFinC: Gaze and Finger Control interface for 3D model manipulation in CAD application

https://doi.org/10.1016/j.cad.2013.08.039

Highlights

  • A multi-modal control method using finger and gaze for 3D manipulation is proposed.

  • Independent gaze pointing interface increases the intuitiveness of the zooming task.

  • The performance of GaFinC is applicable to actual CAD tools.

  • User experience interviews report higher intuitiveness than with a mouse interface.

Abstract

Natural and intuitive interfaces for CAD modeling, such as hand gesture controls, have received considerable attention recently. However, despite their high intuitiveness and familiarity, their use in actual applications has been found to be less comfortable than a conventional mouse interface because of the physical fatigue users accumulate over long periods of operation. In this paper, we propose an improved gesture control interface for 3D model manipulation tasks that offers conventional-interface-level usability with low user fatigue while maintaining a high level of intuitiveness. By analyzing the problems of previous hand gesture controls in translation, rotation, and zooming, we developed GaFinC, a multi-modal Gaze and Finger Control interface. GaFinC tracks precise hand positions, recognizes several finger gestures, and uses an independent gaze pointing interface to set the point of interest. To verify the performance of GaFinC, manipulation accuracy and completion-time tests were conducted and their results were compared with those of a conventional mouse. Comfort and intuitiveness were also scored through user interviews. Although the GaFinC interface fell short of the mouse in accuracy and completion time, its performance is at an applicable level, and users found it more intuitive than a mouse interface while maintaining a usable level of comfort.

Introduction

Much research into human–computer interaction (HCI) has been conducted recently, and the development of better interfaces has been directed towards making HCI more natural and intuitive. In the CAD field as well, many HCI interfaces have been actively developed, among which hand gesture control for CAD modeling tasks is one. Early studies used wearable hardware to sense hand coordinates and gestures [1], [2], but because special wearable hardware was an obstacle to wider adoption, vision-based gesture tracking [3], [4], [5] became one of the main research topics. More recently, hand tracking and skeleton recognition using depth-sensing cameras such as the ‘Kinect’ have been introduced [6], [7], [8], [9]. These hand gesture control interfaces are generally designed around real-world metaphors, and this intuitiveness makes them user friendly. However, despite their intuitiveness and familiarity, their usability in actual 3D CAD modeling applications is less comfortable than that of a conventional interface such as a mouse. Most importantly, previous hand gesture control interfaces were not suitable for long periods of operation because they increase users’ physical fatigue. In this paper, we propose an improved gesture control interface that offers conventional-interface-level usability with low fatigue while maintaining a high level of intuitiveness. As a priority, we focus on 3D model manipulation tasks, which are among the most frequent operations in 3D CAD modeling. By analyzing the problems of previous hand gesture controls in manipulation tasks, we derived the following approaches.

  • (1)

    Precise hand and finger tracking control.

  • (2)

    Easy application control with finger gestures.

  • (3)

    Gaze tracking as an independent pointing interface.

To achieve these, we developed the multi-modal hand gesture control interface ‘GaFinC’ (Gaze and Finger Control). The problems identified in previous work and our approaches are discussed in detail in the next section.

Section snippets

Background

In most CAD applications, manipulation can be divided into three tasks: translation, rotation, and zooming. We therefore examine the problems of previous research separately for each manipulation task. The definitions of terms are shown in Fig. 1.

The translation task moves a 3D model in parallel within the XY plane to reveal information hidden outside the camera’s field of view. In general CAD applications, translation is performed by dragging a mouse in the XY plane
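
To make this mapping concrete, the following minimal Python sketch shows how a 2D drag displacement could be turned into a parallel XY translation of the view. It is our illustration, not the authors’ implementation; the function name and the pixels-per-unit scale factor are assumptions.

    import numpy as np

    def translate_view(view_matrix, drag_dx, drag_dy, pixels_per_unit=100.0):
        """Apply a parallel XY translation to a 4x4 view matrix.

        drag_dx, drag_dy: drag displacement in screen pixels
        pixels_per_unit: assumed screen-to-world scale factor
        """
        offset = np.eye(4)
        offset[0, 3] = drag_dx / pixels_per_unit    # move along the camera X axis
        offset[1, 3] = -drag_dy / pixels_per_unit   # screen Y grows downward
        return offset @ view_matrix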

Our approach

From our review of previous research, we summarize the problems of previous hand gesture controls in Fig. 2. To solve these problems, we set our approach as follows.

  • (1)

    Minimizing floating body and hand movement.

  • (2)

    Changing the manipulation task only with hand gestures.

  • (3)

    An additional independent pointing interface having simple position error feedback.

To satisfy this approach, we developed the multi-modal 3D model manipulation interface ‘GaFinC’: Gaze and Finger Control. The GaFinC interface is
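
As the highlights note, the independent gaze pointing interface is used to increase the intuitiveness of zooming. The Python sketch below illustrates the general idea of zooming about the current point of interest; it is only an illustration under our own assumptions (the function name, matrix conventions, and gaze-to-world mapping are not taken from the paper).

    import numpy as np

    def zoom_about_point(view_matrix, poi_world, zoom_factor):
        """Scale the view about a world-space point of interest.

        poi_world: 3D point resolved from the gaze tracker (assumed already
                   mapped from the 2D gaze point onto the model)
        zoom_factor: > 1 zooms in, < 1 zooms out
        """
        poi = np.asarray(poi_world, dtype=float)
        to_origin = np.eye(4)
        to_origin[:3, 3] = -poi
        scale = np.diag([zoom_factor, zoom_factor, zoom_factor, 1.0])
        back = np.eye(4)
        back[:3, 3] = poi
        # translate the point of interest to the origin, scale, translate back
        return view_matrix @ back @ scale @ to_origin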

Manipulation gesture design

In this section, the gestures for the three manipulation tasks are designed. The basic premise of the gesture design is to reflect user behavior in the real world while minimizing physical fatigue. The designed hand gestures and their descriptions are shown in Fig. 3 and Fig. 4.

First, the hand gesture for the neutral state, which means idle, is designed as a both-hands-open posture.

The translation task is designed to be controlled by one hand. When users try to move something in the real world, first they
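
Only the neutral (both hands open) and one-hand translation gestures are described in this snippet. The Python sketch below shows one way such recognized finger states might be mapped to manipulation modes; the rotation and zoom mappings are placeholders of ours, not the paper’s gesture definitions.

    from enum import Enum, auto

    class Mode(Enum):
        NEUTRAL = auto()     # both hands open: idle state
        TRANSLATE = auto()   # one hand grasping (as described for translation)
        ROTATE = auto()      # placeholder: assumed two-hand gesture
        ZOOM = auto()        # placeholder: assumed gesture combined with gaze

    def classify_mode(left_grasp, right_grasp):
        """Map recognized finger states (grasp = hand not open) to a mode."""
        if not left_grasp and not right_grasp:
            return Mode.NEUTRAL
        if left_grasp != right_grasp:        # exactly one hand grasping
            return Mode.TRANSLATE
        return Mode.ROTATE                   # both hands grasping (assumed)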

System implementation

The GaFinC interface consists of three parts: the gaze tracker, the hand and finger gesture recognizer, and the data integration center. Using GaFinC, the user moves their hands and fingers and shifts their gaze point. These control data are captured by the finger gesture recognizer and the gaze tracker, and the recognized data are transmitted to the data integration center, where they are converted into the proper format for the target application. An overview of the GaFinC interface is
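
A minimal Python sketch of the data flow described above follows; the message formats, field names, and command structure are illustrative assumptions, not the actual GaFinC protocol.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class GazeSample:
        screen_xy: Tuple[float, float]            # gaze point on screen, in pixels

    @dataclass
    class HandSample:
        mode: str                                 # e.g. "neutral", "translate", "rotate", "zoom"
        delta: Tuple[float, float, float]         # hand displacement since the last frame

    @dataclass
    class ManipulationCommand:
        task: str
        delta: Tuple[float, float, float]
        point_of_interest: Optional[Tuple[float, float]]

    def integrate(gaze: GazeSample, hand: HandSample) -> Optional[ManipulationCommand]:
        """Combine one gaze sample and one hand sample into a command that a
        data integration center could forward to the target CAD application."""
        if hand.mode == "neutral":
            return None                           # idle: no command is emitted
        poi = gaze.screen_xy if hand.mode == "zoom" else None
        return ManipulationCommand(task=hand.mode, delta=hand.delta,
                                   point_of_interest=poi)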

User test

To verify the performance of the GaFinC interface, two kinds of tests were conducted: a fast information finding test and an accurate manipulation test (see Fig. 9). In total, eight males ranging from 26 to 33 years of age, all familiar with CAD applications, took part in the tests, with each participant practicing with the GaFinC interface for at least 5 min to become accustomed to it. ‘Solidworks’ [19] was used as the CAD application. For the fast information finding test, users were asked to find
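
For this kind of interface comparison, a simple analysis script could aggregate completion time and positioning error per interface. The Python sketch below is hypothetical: the CSV layout and column names are assumptions, and no actual study data are included.

    import csv
    from statistics import mean

    def summarize(path):
        """Print mean completion time and error per interface.

        Expects a CSV with columns: participant, interface ("GaFinC" or "mouse"),
        time_s, error_mm. All column names are hypothetical.
        """
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        for interface in ("GaFinC", "mouse"):
            sel = [r for r in rows if r["interface"] == interface]
            if not sel:
                continue
            print(interface,
                  "mean time (s):", round(mean(float(r["time_s"]) for r in sel), 2),
                  "mean error (mm):", round(mean(float(r["error_mm"]) for r in sel), 2))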

Conclusions

We proposed GaFinC, a multi-modal Gaze and Finger Control interface for 3D model manipulation tasks. It combines precise hand tracking and finger gesture recognition with an independent gaze tracker for setting the point of interest. In the verification tests, the GaFinC interface fell short of the mouse in accuracy and completion time. Although GaFinC scored better in overall intuitiveness in user interviews, it still needs to be

References (20)

  • Kumar P, Verma J, Prasad S. Hand data glove: a wearable real-time device for human–computer interaction, Hand...
  • Kim D, Hilliges O, Izadi S, Butler AD, Chen J, Oikonomidis I, Olivier P. Digits: freehand 3D interactions anywhere...
  • Wachs JP, et al. Vision-based hand-gesture applications. Communications of the ACM (2011).
  • Du W, Li H. Vision based gesture recognition system with single camera. In: 5th international conference on signal...
  • Wang R, Paris S, Popović J. 6D hands: markerless hand-tracking for computer aided design. In: Proceedings of the 24th...
  • Murugappan S, et al. Shape-it-up: hand gesture based creative expression of 3D shapes using intelligent generalized cylinders. Computer-Aided Design (2012).
  • Fiorentino M, et al. Design review of CAD assemblies using bimanual natural interface. International Journal on Interactive Design and Manufacturing (IJIDeM) (2012).
  • Cho S, Heo Y, Bang H. Turn: a virtual pottery by real spinning wheel. In: ACM SIGGRAPH 2012 posters, SIGGRAPH’12, 2012,...
  • Dave D, Chowriappa A, Kesavadas T. Gesture interface for 3D CAD modeling using...
  • Fiorentino M, et al. Augmented technical drawings: a novel technique for natural interactive visualization of computer-aided design models. Journal of Computing and Information Science in Engineering, Transactions of the ASME (2012).

Cited by (43)

  • Building EEG-based CAD object selection intention discrimination model using convolutional neural network (CNN)

    2022, Advanced Engineering Informatics
    Citation excerpt:

    However, users are still required to operate CAD with conventional devices like mouse, keyboard and so on, which is not conducive for the user to express design intention naturally. In the research community of Human-Computer Interaction (HCI), more and more intuitive interactive modes are applied to CAD [1,2,3,4], enabling the user to interact with CAD naturally by discriminating the design intention based on physiological signals. As the first step of model operation in CAD, object selection plays an important role in the whole modeling process.

  • A novel user-based gesture vocabulary for conceptual design

    2021, International Journal of Human Computer Studies
    Citation excerpt:

    These activities were explored to different extents in a number of studies, and typically linked to 3D CAD systems. Hand gestures were used for 3D architectural urban planning (Buchmann et al., 2004, Yuan, 2005), cable harness design (Robinson et al., 2007), CAD (Dani and Gadh, 1997, Kim et al., 2005b, Qin et al., 2006, Holz and Wilson, 2011, Kang et al., 2013, Vinayak et al., 2013, Arroyave-Tobón et al., 2015, Huang et al., 2018), or manipulation of already created objects (Chu et al., 1997, Kela et al., 2006, Qin et al., 2006, Bourdot et al., 2010, Kang et al., 2013, Vinayak et al., 2013, Song et al., 2014, Beattie et al., 2015, Noor and Aras, 2015, Xiao and Peng, 2017), and virtual pottery (Dave et al., 2013, Han and Han, 2014, Vinayak and Ramani, 2015). The applications typically used free-form gestures for the creation of splines or surfaces that build up a 3D model (Chu et al., 1997, Buchmann et al., 2004, Kim et al., 2005a, Robinson et al., 2007, Holz and Wilson, 2011, Vinayak et al., 2013, Han and Han, 2014, Arroyave-Tobón et al., 2015, Vinayak and Ramani, 2015).

  • Gesture and speech elicitation for 3D CAD modeling in conceptual design

    2019, Automation in Construction
    Citation excerpt:

    Studies have also investigated the terms that people naturally use to communicate shape and shape modifications [39]. While extant studies acknowledge the advantages of gestural interaction for 3D CAD modeling for conceptual design, they focus primarily on aspects such as accuracy and efficiency of gesture recognition [33,40–43]. Hence, these studies employ limited, author-defined gestures.

  • The challenges in computer supported conceptual engineering design

    2018, Computers in Industry
    Citation excerpt:

    Lee et al. [44] tested this in a game environment but believed a similar approach could work for CAED. Song et al. [68] developed an intuitive interface GaFinC, which combined hand gestures and gaze. Both used gaze to select an object which an action would be performed on.

  • Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation

    2017, International Journal of Human Computer Studies
    Citation excerpt:

    This also does not require continuous eye hand coordination. Song et al. (2014) discussed a computer-aided design (CAD) application that used hand gesture for basic manipulation such as translation, zoom and rotation. The application only applied eye tracker to assist zoom by using the gaze position as the centre of zooming.
