Computer-Aided Design

Volume 38, Issue 3, March 2006, Pages 210-223

A new modeling interface for the pen-input displays

https://doi.org/10.1016/j.cad.2005.10.007

Abstract

Sketch interactions based on interpreting multiple pen markings into a 3D shape are easy to design but not to use. First, it is difficult for the user to memorize a complete set of pen markings for a given 3D shape. Second, the system waits for the user to complete the sequence of pen markings, often causing a mode error. To address these problems, we present a novel interaction framework suitable for interpretations based on single-stroke markings on pen-input displays; within this framework, 3D shape modeling operations are designed to create appropriate communication protocols.

Introduction

Most 3D modeling systems in the WIMP style do not match many important benefits of traditional tools, such as pencil on paper, for communicating design ideas at an early stage [8]. One of the main advantages of the pencil-on-paper interface is that it allows ambiguous sketching, quite unlike existing sketch-based approaches that have a uniform binding similar to a UNIX command-line interpreter (i.e. one pen marking results in one command), and it does not distract the focus of attention from the drawing task (e.g. to select menu buttons). The pen-input display implements most features of the pencil-on-paper paradigm, although it still has limited resolution and is not as handily manipulated as paper. Meanwhile, it is still questionable whether freehand drawings can be quickly and directly turned into a feasible 3D model for engineering purposes. To this end, many sketch-based approaches [2], [4], [9], [12], [15], [19], [25], [33] have been suggested; the user inputs pen markings and the system interprets them as a geometric primitive or as a modeling operation. However, providing interpretations of the pen markings, called an informal interface [19], [20], has posed the following problems:

  • Uniform binding: Sketch-based systems interpret each pen marking and convert it into a primitive component (or a command). However, not all pen markings carry a definite user intention, since they often arise from a vague idea.

  • Geometric ambiguity: The geometry contained in a 2D pen marking may be insufficient to infer 3D geometry, requiring further user intervention.

  • Memorability: Usually, a sequence of pen markings is required to constitute an interpretation into a 3D shape (e.g. a box, a cone, etc.). It therefore becomes even harder for the user to remember the constituents of the pen sequence.

The problems listed above can be rephrased as ‘inappropriate communication protocols between the user and the system’. To handle these problems, research needs to span topics from user interface design to 3D modeling. However, earlier 3D sketch systems seem to set aside the issues of user interaction while heavily emphasizing 3D modeling functionality.

Contributions: We solve these problems by (1) reducing the number of pen markings that must be recognized for 3D modeling operations (mostly arrow-shaped pen markings), (2) further simplifying the multiple-stroke pen markings of other approaches [2], [12], [25], [33] into single strokes, and (3) diversifying the shapes that can be made from the recognized pen markings through an interaction framework, rather than by adding more recognizable pen markings that burden users with remembering them, as has often been the case for other sketch-based 3D modeling approaches. Our design of the sketch-based 3D modeling system is based on the following key ideas:

  • Interaction framework: We generalize an interaction framework for pen-based modeling to both 2D and 3D using immediate- and selective-manipulation. Every input pen marking falls into one of the two techniques, so feedback can always be given to the user (Section 4).

  • 3D shape modeling: We provide a novel 3D modeling environment that requires only a few single-stroke pen markings while supporting the following diverse CAD operations:

  • A consistent style of model generation based on the generative modeling paradigm [29], including extrusion, sweeping (translational, rotational, freeform), lofting, and filling.

  • Construction tools: positioning the working axis and working plane, and model transformations such as axis-aligned translation, rotation, and scaling.

  • Sketching gesture: We design most of the recognizable pen markings for generative modeling to share a common arrow shape. Since this gesture is used throughout sketching practice to represent a behavior or a relation between entities, the user can remember and try the markings more easily and instantly (a minimal recognition sketch follows this list).
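
The paper does not publish its gesture recognizer, so the following is only a minimal sketch of one way a single stroke could be classified as an arrow. The heuristic (a long shaft followed by a short head that folds back sharply near the end of the stroke), the thresholds, and the name `looks_like_arrow` are our own assumptions, not the authors' method.

```python
import math

def _angle(p, q):
    """Direction of the segment from p to q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def looks_like_arrow(points, min_turn_deg=90.0, max_head_ratio=0.4):
    """Heuristically decide whether a single stroke is an 'arrow' marking.

    Assumption: an arrow is a long shaft ending in a short head, so the
    sharpest turn should be large and located near the end of the stroke.
    `points` is the sampled stroke as a list of (x, y) tuples.
    """
    if len(points) < 4:
        return False

    # Arc length of each segment and of the whole stroke.
    seg_len = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    total = sum(seg_len)
    if total == 0:
        return False

    # Find the sharpest turn and its relative position along the stroke.
    sharpest, where = 0.0, 0.0
    walked = 0.0
    for i in range(1, len(points) - 1):
        walked += seg_len[i - 1]
        turn = abs(_angle(points[i - 1], points[i]) - _angle(points[i], points[i + 1]))
        turn = min(turn, 2 * math.pi - turn)   # wrap into [0, pi]
        if turn > sharpest:
            sharpest, where = turn, walked / total

    # Arrow: a sharp fold located in the trailing part of the stroke.
    return math.degrees(sharpest) >= min_turn_deg and where >= 1.0 - max_head_ratio

# A shaft with a hooked tip passes; a straight line or a closed profile does not.
print(looks_like_arrow([(0, 0), (1, 0), (2, 0), (3, 0), (2.8, 0.3), (2.6, 0.6)]))  # True
print(looks_like_arrow([(0, 0), (1, 0), (2, 0), (3, 0)]))                          # False
```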

The problem of uniform binding has not been completely resolved, but it is relieved by introducing the selective-manipulation technique. Geometric ambiguity has been overcome by embedding the interaction framework into the modeling pipeline: a direct manipulator follows the user's pen marking as feedback with which the user can refine his or her intention. The memorability issue has been improved by adopting the generative modeling paradigm and introducing a sketching gesture: most modeling operations are invoked from the ‘arrow’ marking, which resembles the modeling behavior of generative modeling.
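
To make this pipeline concrete, here is a toy sketch, under our own assumptions, of how a recognized arrow drawn over a closed profile might invoke a generative command (an extrusion) and then hand the user a direct manipulator for refinement. The names `Extrusion`, `HeightManipulator`, and `on_arrow_over_profile` are hypothetical; the paper does not provide code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Extrusion:
    """Generative-modeling command: sweep a closed 2D profile along a direction."""
    profile: List[Tuple[float, float]]      # closed polygon on the working plane
    direction: Tuple[float, float, float]   # unit vector in 3D
    height: float                           # initial height seeded from the arrow length

    def solid(self):
        """Toy evaluation: return the bottom and top rings of the extruded solid."""
        dx, dy, dz = (c * self.height for c in self.direction)
        bottom = [(x, y, 0.0) for x, y in self.profile]
        top = [(x + dx, y + dy, dz) for x, y in self.profile]
        return bottom, top

class HeightManipulator:
    """Direct manipulator handed to the user right after interpretation
    (immediate-manipulation), so the initial guess can be refined by dragging."""
    def __init__(self, command: Extrusion):
        self.command = command

    def drag(self, delta: float):
        self.command.height = max(0.0, self.command.height + delta)
        return self.command.solid()

def on_arrow_over_profile(profile, arrow_vector):
    """Interpret an arrow drawn over a closed profile as an extrusion request."""
    length = sum(c * c for c in arrow_vector) ** 0.5
    direction = tuple(c / length for c in arrow_vector)
    return HeightManipulator(Extrusion(profile, direction, length))
```

For instance, `on_arrow_over_profile([(0, 0), (1, 0), (1, 1), (0, 1)], (0, 0, 2))` would seed a unit-square extrusion of height 2 that the user could then lengthen or shorten through the manipulator rather than by drawing further markings.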

Paper organization: The rest of this paper is organized as follows. We survey previous work on sketch-based modeling in Section 2 and give an overview of our system in Section 3. In Section 4, we describe our interaction framework for pen-based modeling, generalized for both 2D and 3D modeling, and build the 3D modeling environment, consisting of various generative modeling commands, on top of this framework. In Section 5, we discuss implementation issues and present the user-testing results of the modeling system. We conclude this paper and present future work in Section 6.

Section snippets

Previous work

We give a brief survey of the interaction and modeling techniques used in other sketch systems, and discuss the pros and cons of using the techniques provided by these systems. For a more extensive survey of the field, we refer readers to [21], [16]; for a comparison of many sketch-based systems from a wider technological viewpoint, see [21]. In Section 2.1, we discuss interaction techniques developed mainly for 2D planar drawings and other recognition techniques, such as voice recognition, although…

System overview

In this section, we give an overview of our sketch-based modeling environment; in the following sections, we explain in detail two important issues relevant to devising such an environment, i.e. the interaction framework and the modeling framework. We begin this section by defining the terminology that we use throughout the paper.

Interaction framework

In this section, we present the interaction framework of our pen-input-based 3D modeling environment, which supports two different types of manipulation: immediate-manipulation, combining the interpretation and the direct manipulator, and selective-manipulation, combining the two more selectively at the discretion of the user. The former improves the awkward input behavior of the two-phase interaction, which was discussed in Section 2, and selective-manipulation permits as-is drawings, thus allowing…
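
As an illustration only (the class and method names below are ours, not the paper's), the split between the two techniques can be thought of as a dispatcher: a stroke recognized as a gesture is interpreted immediately and paired with a manipulator, while everything else is kept as-is until the user explicitly selects strokes for interpretation.

```python
class SketchCanvas:
    """Toy dispatcher for the two interaction techniques; the split shown here
    is our reading of the framework, not code from the paper."""

    def __init__(self, recognizer, interpreter):
        self.recognizer = recognizer     # e.g. looks_like_arrow from the earlier sketch
        self.interpreter = interpreter   # maps a recognized gesture to a modeling command
        self.raw_strokes = []            # as-is drawings kept for later selection

    def on_stroke(self, stroke):
        if self.recognizer(stroke):
            # Immediate-manipulation: interpret now and return a direct
            # manipulator so the user gets feedback without leaving the task.
            return self.interpreter(stroke)
        # Selective-manipulation: keep the stroke untouched; the user may
        # later select it and explicitly ask for an interpretation.
        self.raw_strokes.append(stroke)
        return None

    def interpret_selection(self, selected):
        """Invoked when the user explicitly selects strokes for interpretation."""
        return [self.interpreter(s) for s in selected]
```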

Implementation, analysis and evaluation

We now discuss the implementation issues of our sketch system and present the results of the user testing we conducted to verify the effectiveness of the system.

Conclusions and future work

We have presented an interaction framework for sketch-based shape modeling on pen-input displays. It encases two different interaction techniques, namely immediate- and selective-manipulation. These interaction techniques alleviate the memorability problem of existing multiple-stroke-based sketching systems and can create diverse 3D shapes that might otherwise be limited by the single-stroke pen markings of our approach. Although the idea of inserting a further interaction step into sketching is…

References (33)

  • Gross MD, Do EY. Ambiguous intentions: a paper-like interface for creative design. In: Proceedings of ACM Symposium on...
  • Hammond T, Davis R. LADDER: a language to describe drawing, display, and editing in sketch recognition. In:...
  • Hong J, Landay J, Long C, Mankoff J. Sketch recognizers from the end-user's, the designer's, and the...
  • Hwang T, et al. Recognize features from freehand sketches. ASME Comput Eng (1994)
  • Igarashi T. Interactive beautification: a technique for rapid geometric design. In: UIST'97; 1997. p....
  • Igarashi T, Hughes JF. A suggestive interface for 3D drawing. In: UIST;...
Cited by (11)

    • The challenges in computer supported conceptual engineering design

      2018, Computers in Industry
      Citation Excerpt:

      All three interfaces were in the early stages of application (for example the system developed by Shesh and Chen [66] only supports drawn edges that are straight lines), but showed promise at the time and researchers were hoping to explore ways to produce more effective work flows for the pen input based interfaces in the future. Times required to learn how to use these interfaces and time spent performing tasks using them were comparable or sometimes even shorter than those in the conventional commercially available CAED systems [38]. Brain-Computer Interfaces (BCI) are the latest development in human computer interfaces, and they use EMG (Electromyography) and/or EEG (Electroencephalography) signals for design information input, either by feature modelling or imagining shapes.

    • Editing 3D models on smart devices

      2015, CAD Computer Aided Design
      Citation Excerpt:

      Therefore, in this research, we focus on gestural modeling in order to create editable 3D models that can be modified at a later point in time. There has been one modeling study involving sequential pen-input strokes: Kim et al. 2006 [10]. This study also pointed out the recognition problem associated with the reconstructive modeling of complex models and presented a method to generate complex CAD models using sequential single strokes.

    • A procedural method to exchange editable 3D data from a free-hand 2D sketch modeling system into 3D mechanical CAD systems

      2012, CAD Computer Aided Design
      Citation Excerpt:

      Interpretation is a process of inferring a modeling command corresponding to a sequence of the recognized strokes. In this study, existing methods are used for stroke recognition and stroke interpretation for gestural modeling [6,29]. A prototype system implemented in this study recognizes seven types of operational strokes or geometry strokes, as shown in Fig. 5.

    • Maintaining the Descriptive Geometry’s Design Knowledge

      2022, Lecture Notes in Networks and Systems