
1 Introduction

Sculpting a physical model in the real world is easy and familiar for a designer. Since the development of computer-aided design/computer-aided manufacturing (CAD/CAM) in the 20th century (Lichten 1984), modeling efficiency has increased greatly, but new problems have arisen for untrained users: they must learn to operate 3D software through complicated and inconvenient interfaces, which is especially hard on novices. Researchers have therefore focused on the 3D input problem for many years (Aish 1979; Lim 2003; Dickinson et al. 2005; Jackie Lee et al. 2006). Aish (1979) argued that 3D input systems should be able to create and modify 3D geometry intuitively, so that the spatial qualities of a design can be interpreted and evaluated directly. However, in the 1980s the CAD interface was limited to text-based commands, and in the 1990s CAD systems were dominated by hybrids of text commands and windows, icons, menus, and pointer (WIMP) interfaces. In most 3D modeling systems, text-based commands and graphical user interfaces (GUIs) are still the mainstream.

The keyboard and mouse have been essential for typing and selecting commands from the first CAD software, "Sketchpad" (Sutherland 1964), to recent CAD packages such as Maya, 3D Studio Max, and Rhino. To create a more intuitive and friendly interface, Ishii and Ullmer (1997) proposed the concept of tangible user interfaces (TUIs), which create seamless interaction between physical interfaces and digital information. Jackie Lee et al. (2006) proposed the TUI-based "iSphere", a hand sensor for manipulating 3D models in a realistic, spatial way. Lee and Ishii (2010) created "Beyond", a collapsible-pen system that manipulates 3D models through gestures and a pen tool. To further increase 3D modeling efficiency, Sharma et al. (2011) developed "MozArt", a toolbar- and button-less interface based on speech and touch for creating computer graphics models.

Meanwhile, the brain-computer interface (BCI) has been applied in various fields: psychological recognition research on raw brainwaves (Paller and Kutas 1992), BCI-driven robot-arm control through imagination (Chapin et al. 1999; Lebedev and Nicolelis 2006), BCI games (Krepki et al. 2007), virtual reality (VR) navigation (forward/backward, left/right) through imagination (Leeb et al. 2004), and BCI-embedded smart space design (Huang 2006, 2011).

2 Problem and Objective

Different software packages usually provide quite different ways to invoke the same command. For instance, the commands for simple and frequently used functions, such as rotating and zooming in/out on an object, are completely different in Maya, 3D Studio Max, and Rhino (Fig. 1). The challenge grows when users must work across packages, for example between Maya and 3D Studio Max. Users must either memorize multiple hotkey combinations (keyboard + mouse) or locate the right graphical icon to complete an action, which can be confusing and time-consuming. To increase 3D modeling efficiency, CAD users typically end up memorizing complicated hotkey combinations for each package.

Fig. 1. Comparison between BCI-CAD system and traditional CAD system

Hence, the goal of this research is to develop a more intuitive and natural way of issuing frequently used 3D CAD modeling commands across different packages. We integrate a BCI into the CAD system to create a user-friendly interface for 3D CAD manipulation (see Fig. 2). By monitoring the brainwaves emitted when the user intends to perform different commands, the system lets users intuitively control 3D rotation and zoom in/out commands through imagination rather than through the traditional inputs (keyboard + mouse or graphical icons).

Fig. 2. Concept of BCI-CAD system

3 Methodology and Steps

To implement a "BCI-embedded CAD" system in which users issue commands through "imagination", the methodology of this research is divided into three steps: first, a BCI training process via EPOC+ for new users; second, implementation of the BCI-embedded CAD system; third, scenario demonstration and evaluation.

3.1 The First Step: BCI Training Process via EPOC+ for New Users

With earlier BCI devices, a long training process was mandatory before a user's brainwaves could be recognized with high accuracy. To ensure that the EPOC+ can be adopted quickly and widely by every user, the first step of this research is to find a short and effective training process for new users. After the training process, users should achieve a 90% accuracy rate when controlling virtual objects in the 3D environment.

3.2 The Second Step: System Implementation

This step is divided into three parts: EEG data acquisition and analysis; digital signal processing and interactive command mapping; and connecting the BCI to 3ds Max using C++ and MAXScript (see Fig. 3).

Fig. 3. BCI-CAD system framework

3.3 The Third Step: Scenario Demonstration and Evaluations

To demonstrate the system prototype, subjects are asked to build a box in 3ds Max and then use imagination commands to zoom in/out on it. To evaluate the efficiency of BCI-CAD, we created two different experiments to exercise the system.

4 BCI Training Process via EPOC+ for New Users

4.1 EPOC+ Installation

In the first part, to acquire an accurate signal, the subjects install the EPOC+ system by following these steps (see Fig. 4). First, the user puts the EPOC+ on his or her head. Second, the user opens the EPOC+ software to check the contact quality of each sensor (see Fig. 4-A); the sensor electrode indicators show the contact quality from high to low as green, yellow, orange, red, and black. Third, the user trains the neutral and command (push, pull, left, or right) brainwave patterns (see Fig. 4-B). Fourth, the user tries to move a virtual box with imagination commands (see Fig. 4-C).
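
For reference, the sensor check in the second step can be sketched as follows, assuming the legacy Emotiv EDK's contact-quality query (function and enum names are taken from those headers and differ in newer Emotiv SDKs); `reportContactQuality` is our hypothetical helper, not code from the paper.

```cpp
#include <iostream>

#include "edk.h"          // legacy Emotiv EDK; names differ in newer SDKs
#include "EmoStateDLL.h"

// Hypothetical helper: print each sensor's contact quality from the current
// EmoState, mirroring the green-to-black scale in the control panel (Fig. 4-A).
void reportContactQuality(EmoStateHandle eState)
{
    EE_EEG_ContactQuality_t quality[32];  // one entry per sensor channel
    int n = ES_GetContactQualityFromAllChannels(eState, quality, 32);
    for (int i = 0; i < n; ++i) {
        std::cout << "sensor " << i << ": "
                  << (quality[i] == EEG_CQ_GOOD ? "good (green)"
                                                : "adjust electrode")
                  << "\n";
    }
}
```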

Fig. 4. EPOC+ installation (Color figure online)

4.2 Experiments of BCI Training Methods

To enable users to control the viewport in 3ds Max through the EPOC+, we need a training process after which they can zoom in/out on a virtual box via imagination. The research therefore first establishes an experimental procedure for achieving more than 80% accuracy on imagination commands, i.e., valid signals. A "valid signal" means that within 5 s of the voice command (e.g., "pull"), the user successfully completes the corresponding 3ds Max command (zooming in on the virtual box) via imagination through the EPOC+ device. Like fingerprints, different users exhibit different EEG brainwave patterns, so to achieve high accuracy each user has to finish the training process before using the BCI-embedded CAD system (Fig. 5).
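
The validity criterion can be stated precisely with a small helper (our illustration, not the paper's code): a trial counts as valid only if the detected command matches the voice cue and arrives within the 5 s window.

```cpp
#include <chrono>

// Sketch of the "valid signal" rule above (hypothetical helper): the detected
// mental command must match the voice cue and arrive within 5 s of it.
enum class Command { Push, Pull };

bool isValidSignal(Command cued, Command detected,
                   std::chrono::steady_clock::time_point cueTime,
                   std::chrono::steady_clock::time_point detectTime)
{
    using namespace std::chrono;
    const bool inWindow = detectTime - cueTime <= seconds(5);
    return inWindow && detected == cued;
}
```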

Fig. 5. EEG data acquisition steps via EPOC+

To find a short and effective training process for new users, we tested three different training procedures. In each training experiment, the user's "imagination command" is tested with 50 randomly ordered "push" or "pull" trials to evaluate accuracy (see Table 1).

Table 1. BCI training experiments

Training Experiment Type 1

The users first complete the neutral training once and the push training once, then attempt the "imagination of push" test 10 times; this sequence is repeated 10 times. Second, the users complete the pull training once, attempt the "imagination of pull" test 10 times, and again repeat the sequence 10 times. Finally, the users perform 50 mixed push-or-pull imagination trials. The accuracy of the mixed push/pull imagination command is 76%.

Training Experiment Type 2

The users complete the neutral, push, and pull training 10 times each. Next, the users perform 50 mixed push-or-pull imagination trials. The accuracy of the mixed push/pull imagination command is 84%.

Training Experiment Type 3

The users complete the neutral, push, and pull training 20 times each, and then perform 50 mixed push-or-pull imagination trials. The accuracy of the mixed push/pull imagination command is 84%.

As shown in Table 1, the longer training process of experiment type 3 did not increase the accuracy rate compared with experiment type 2. Our data suggest that new users can reach 80% accuracy after completing the type 2 procedure, which takes approximately 20 min.

5 System Implementation

Our proposed framework has two building blocks: the 3ds Max plug-in and the Emotiv API. The main task is to provide an interface between the two that realizes one-directional communication from the EPOC+ hardware to the 3ds Max software. The requirement is therefore a single plug-in file for 3ds Max built on the Emotiv API (Fig. 6).
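
As a rough illustration of that single plug-in file, the skeleton below uses the standard 3ds Max SDK plug-in entry points (a Global Utility Plug-in that loads with the application). The class names, class ID, and descriptor are hypothetical placeholders; the paper does not publish its plug-in source.

```cpp
// Sketch only: a Global Utility Plug-in (GUP) skeleton with the standard
// 3ds Max SDK DLL exports. All names and the class ID are placeholders.
#include "max.h"
#include "gup.h"

#define BCICAD_CLASS_ID Class_ID(0x7e0c1d2b, 0x4a913f58) // arbitrary placeholder

class BCICADGup : public GUP {
public:
    DWORD Start() override {
        // A real build would connect to EmoEngine here and start polling
        // (see the event-loop sketch below).
        return GUPRESULT_KEEP;   // stay loaded for the whole session
    }
    void Stop() override { /* disconnect from EmoEngine */ }
};

class BCICADClassDesc : public ClassDesc {
public:
    int IsPublic() override { return TRUE; }
    void* Create(BOOL) override { return new BCICADGup; }
    const TCHAR* ClassName() override { return _T("BCI-CAD Bridge"); }
    SClass_ID SuperClassID() override { return GUP_CLASS_ID; }
    Class_ID ClassID() override { return BCICAD_CLASS_ID; }
    const TCHAR* Category() override { return _T(""); }
};

static BCICADClassDesc bciCadDesc;

// Standard exports that every 3ds Max plug-in DLL provides:
extern "C" __declspec(dllexport) const TCHAR* LibDescription() { return _T("BCI-CAD bridge"); }
extern "C" __declspec(dllexport) int LibNumberClasses() { return 1; }
extern "C" __declspec(dllexport) ClassDesc* LibClassDesc(int i) { return i == 0 ? &bciCadDesc : nullptr; }
extern "C" __declspec(dllexport) ULONG LibVersion() { return VERSION_3DSMAX; }
```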

Fig. 6. Steps of EPOC+ training process

Customizing our own plug-in for 3ds Max is straightforward, since much of the software itself is assembled from plug-ins. The best-known extension point is the 3ds Max user interface, which users can tailor via MAXScript or a dynamic link library, and any user-specified plug-in can be built with the 3ds Max SDK. In our proposed system, the plug-in is event-triggered: it listens for the zoom in/out signals that EmoEngine emits through the Emotiv API and responds by zooming the object in or out in 3ds Max. The main job of our plug-in is thus to read the state of the user's brainwaves through the EPOC+ and decide whether the object should be zoomed in or out.
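
The core polling logic can be sketched as follows, assuming the legacy Emotiv EDK C API and its Cognitiv (push/pull) mental commands (EE_*/ES_* names; newer Emotiv SDKs rename these to IEE_*/IS_*). The `applyZoom` stub stands in for the MAXScript or SDK call that performs the actual viewport zoom, which the paper does not spell out, and the 0.2 power threshold is likewise our assumption. It is shown as a standalone console loop for clarity; inside the GUP sketched above it would run on a worker thread.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

#include "edk.h"           // legacy Emotiv EDK; newer SDKs use IEE_*/IS_* names
#include "edkErrorCode.h"
#include "EmoStateDLL.h"

// Stub for the 3ds Max side: the real plug-in would forward the command to
// the viewport (via MAXScript or the 3ds Max SDK); here we only log it.
static void applyZoom(bool zoomIn)
{
    std::cout << (zoomIn ? "zoom in" : "zoom out") << std::endl;
}

int main()
{
    if (EE_EngineConnect() != EDK_OK) {      // connect to the local EmoEngine
        std::cerr << "Cannot connect to EmoEngine\n";
        return 1;
    }
    EmoEngineEventHandle eEvent = EE_EmoEngineEventCreate();
    EmoStateHandle       eState = EE_EmoStateCreate();

    while (true) {
        if (EE_EngineGetNextEvent(eEvent) == EDK_OK &&
            EE_EmoEngineEventGetType(eEvent) == EE_EmoStateUpdated) {

            EE_EmoEngineEventGetEmoState(eEvent, eState);
            EE_CognitivAction_t action = ES_CognitivGetCurrentAction(eState);
            float power = ES_CognitivGetCurrentActionPower(eState);

            // Event-triggered mapping: a trained "pull" zooms in on the box,
            // "push" zooms out. The 0.2 cutoff is an assumed threshold.
            if (power > 0.2f) {
                if      (action == COG_PULL) applyZoom(true);
                else if (action == COG_PUSH) applyZoom(false);
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    // A production plug-in would clean up on shutdown:
    // EE_EmoStateFree(eState); EE_EmoEngineEventFree(eEvent); EE_EngineDisconnect();
}
```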

Once a user completes the training step, he or she can simply download the plug-in and start controlling the zoom in/out feature by imagination (Figs. 7 and 8).

Fig. 7. System framework

Fig. 8. The relationship between EmoEngine and Emotiv

6 Scenario Demonstration and Evaluations

To evaluate the BCI-CAD system, we created two different experiments. We recruited a new 3ds Max user as our subject, without telling him/her the hotkeys for "zoom in" and "zoom out" in 3ds Max, and asked him/her to find the commands either through the graphical icons or through imagination. The experiments were: (1) the traditional GUI-CAD experiment; (2) the BCI-CAD experiment.

6.1 Traditional GUI-CAD Experiments

We taught the subject to create a box in the 3ds Max viewport and then asked him/her to find the "zoom in" and "zoom out" commands to "push" or "pull" the virtual object in the GUI. The subject was asked to speak aloud at every step before performing each GUI command. We found that a new user could spend more than one minute looking for the "zoom in" icon, but once the user had learned where to find the zoom-in command, he or she could easily find "zoom out". The subject spent approximately 2 min in total finishing the zoom-in and zoom-out tasks. The manipulation results completely matched the subject's intentions.

6.2 BCI-CAD Experiments

Before starting the BCI-CAD experiment, the subject was asked to wear the EPOC+ headset and to ensure good electrode contact quality (all electrode indicators turn green). The subject was then asked to create a box with the mouse and keyboard (as in the previous experiment). To verify whether the BCI-CAD system is more intuitive than a GUI-command-based CAD system, we asked the subject to control the virtual box in 3ds Max with mind commands instead of the mouse and keyboard (see Fig. 9-D, E, F). Meanwhile, the subject had to speak aloud while imagining each command.

Fig. 9. Scenario demonstration: (A) the user wears the EPOC+ headset and checks that the electrode contact quality is good; (B, C) the user builds a box in 3ds Max with the mouse and keyboard; (D, E) the user zooms out on the box in the viewport using imagination commands; (F) the user smoothly zooms in on the box in the viewport. (Color figure online)

As shown in Fig. 10, we could observe whether the user's intention matched the mind command. Sometimes the real-time mind command was delayed, owing to MAXScript overhead and the size of the 3ds Max software; sometimes the BCI-CAD command was triggered twice (see Fig. 10) for a single intention. Nevertheless, almost every activity (zoom-in or zoom-out command) was recognized through imagination, and without learning any keyboard or mouse commands the user could easily and naturally control the 3ds Max viewport (Table 2). The executed manipulations again matched the user's mind intentions.

Fig. 10. Representation of real-time mind-command activity

Table 2. Statistics of the intention and mind command

7 Conclusion

This research creates a "BCI-embedded CAD" system prototype that connects the EPOC+ to 3ds Max via MAXScript and C++. Through the system, users can enhance their 3D modeling ability by "thinking the commands". The training procedure is simple and efficient: users can finish it within 20 min, after which they achieve 80% accuracy when using mind commands to push or pull the viewport in 3ds Max.

However, as the scenario demonstration showed, the user sometimes relied on both the "mind command" and the "physical interface" (keyboard or mouse) to control the virtual box. Future work will therefore focus on applying the "BCI-embedded CAD" system across 3D CAD platforms (3ds Max, Rhino, and Maya), so that BCI-embedded CAD users can work more efficiently than traditional users.

As for its contribution, this research is significant not only for architectural engineering but also for the design fields. In the current prototype, the system can only issue an on/off signal (pull or "invalid pull") to zoom the viewport in/out. Future work will focus on finer, partial adjustment of different viewports.