1 Introduction

Advances in 3D technology have made virtual clothing environments possible. In particular, 3D scanners and 3D virtual clothing simulation applications have been developed and incorporated into solutions for the fashion industry. Furthermore, many companies use automated systems based on digital data, derived from the optimization of virtual clothing design, to improve working efficiency and reduce costs in technical areas [1–3]. Both 2D and 3D CAD/CAM systems are available to design virtual garments and textiles. 2D CAD systems are used to design the 2D patterns that compose the garment and from which it is manufactured. 3D systems can re-create and simulate the garment’s behavior by assembling the 2D patterns on a 3D digital mannequin [4]. In a fully virtual approach, the tailor designs the garment around the customer’s avatar, acquired using a body scanner.

However, these systems do not take into account a preliminary phase, usually carried out by the tailor on a mannequin, in which the standard measurements and size of the final user are derived. In this phase, the tailor takes the necessary measurements with a flexible tape measure around the mannequin, or around the customer in different postures. This paper introduces the Tailor’s LABoratory (TLAB), a low-cost mixed reality application that emulates this phase as it is traditionally performed by the tailor. First, we introduce the low-cost technology available to develop mixed reality applications. Then, TLAB is described, together with the developed Natural User Interface (NUI). Finally, preliminary tests and results are presented and discussed.

2 Low Cost Devices for Mixed Reality

In recent years, many devices have been designed to make interaction with a virtual environment natural, allowing the user to interact with the application as he/she would in a real situation [5–7]. Many of them come from the entertainment sector, are low cost, and provide Software Development Kits (SDKs) for application development.

Among the various IT devices for mixed reality, two categories are particularly relevant: devices for 3D visualization and hand-tracking devices for interacting with the 3D environment in a natural way. The first category is represented by the latest generation of head mounted displays (HMDs), which allow a 3D virtual environment to be visualized in the same way we see the real world [8, 9]. In the last two years, the most important low-cost HMDs have been the Oculus Rift (Fig. 1a), the HTC Vive (Fig. 1b) and the Google Cardboard (Fig. 1c). Their cost is reasonably low and, therefore, easily affordable for both users and developers.

Fig. 1. Head mounted displays: (a) Oculus Rift DK2, (b) HTC Vive and (c) Google Cardboard.

The second category of devices concerns augmented interaction with 3D objects and worlds using the hands. Hand-tracking devices detect the hands with high precision and are able to track each single finger, as well as thin objects held in the hand. Among them, the most important and cheapest devices are the Leap Motion (Fig. 2a) [10] and the Duo3D (Fig. 2b) [11].

Fig. 2. Low-cost hand-tracking devices: (a) Leap Motion and (b) Duo3D.

In our research context, a third type of device has to be taken into account: 3D scanners to acquire 3D models of the human body. Also in this case, low-cost solutions, such as the Microsoft Kinect v2 [12], are available. Other low-cost 3D scanners have been exploited to take measurements along the human body, such as the Apple PrimeSense Carmine (Fig. 3b) and the Occipital Structure Sensor (Fig. 3a). The PrimeSense Carmine acquires a 3D human model in the same way as a Microsoft Kinect, but it can only be used on the iOS platform. For example, it has been adopted in an innovative solution where several sensors were positioned inside a cabin to acquire all the body measurements needed for custom-made garments. The Occipital Structure Sensor enables 3D scanning from Apple mobile devices, making the technology portable and suitable for mobile applications.

Fig. 3. Occipital Structure Sensor (a) and Apple PrimeSense Carmine (b).

Fig. 4. Software architecture of TLAB.

3 Tailor’s Laboratory

Our main goal has been to develop a virtual environment where the tailor can interact with the virtual customer as s/he traditionally does. The Oculus Rift was selected as head mounted display and the Leap Motion as hand-tracking device, since we had already used both successfully in other design applications. A body scanner solution based on four Microsoft Kinect v2 sensors has been adopted to acquire the customer’s body.

Figure 4 shows the TLAB software architecture, where:

  • The Visualization ToolKit (VTK) [13] is used to manage the 3D rendering of the virtual environment. TLAB is composed of a set of widgets, such as sliders, buttons and the virtual tape measure. Each operation inside TLAB is carried out through VTK.

  • Microsoft Kinect Fusion for the Microsoft Kinect v2 is used for the 3D acquisition of the human avatar.

  • The Leap Motion SDK is used for hand tracking: it provides the position and orientation of the hands and fingers detected by the Leap Motion device in real time. The interaction with the 3D environment has been designed to support long-lasting use while reducing fatigue in the arms and shoulders.

  • The Oculus Rift SDK 2.0 tracks the head position and orientation and permits the visualization of the 3D virtual environment with depth perception. Also in this case, the Oculus Rift SDK is strictly linked to the 3D rendering of the scene. A sketch of the per-frame synchronization among these components is given after this list.
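
To make the synchronization concrete, the following minimal sketch shows a per-frame loop polling the Leap Motion SDK (v2 C++ API) for hand data; updateHeadPose, renderScene and processHand are hypothetical application hooks standing in for the Oculus and VTK parts of the in-house framework, not part of any SDK.

    #include <Leap.h>   // Leap Motion SDK v2 C++ API

    // Hypothetical application hooks (not part of any SDK):
    void updateHeadPose();                      // read Oculus head position/orientation
    void renderScene();                         // render the VTK scene to the HMD
    void processHand(const Leap::Hand& hand);   // feed the NUI with palm/finger data

    int main() {
        Leap::Controller controller;            // connects to the Leap Motion service
        while (true) {                          // per-frame synchronization loop
            Leap::Frame frame = controller.frame();          // latest tracking frame
            for (int i = 0; i < frame.hands().count(); ++i)
                processHand(frame.hands()[i]);  // positions/orientations in mm
            updateHeadPose();                   // Oculus SDK: head pose
            renderScene();                      // VTK: stereo rendering
        }
    }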

Virtual widgets are automatically visualized in the user’s field of view in the Oculus Rift when the 3D body shape is detected; the user can then start to take the standard measurements using his/her hands and fingers, as detected by the Leap Motion device. Each virtual object is rendered through the Oculus Rift SDK.

3.1 Framework for Synchronization

The synchronization among the mentioned SDKs is achieved through a software framework developed in house, which contains a set of basic classes that simplify the use of the whole system. The framework is general purpose and fully independent of the application the developer wants to implement. In fact, it has also been used in other mixed reality applications that emulate the traditional manufacturing process of a product, such as lower limb prostheses.

The framework allows the Oculus Rift SDK to be used together with VTK and the Leap Motion SDK without direct access to their low-level APIs. Furthermore, a set of methods is available to create a Natural User Interface [14] with the VTK widgets developed for mixed reality. In this way, any existing application can be re-designed in a very simple way to run in a mixed reality environment.

As shown in Fig. 5, a simple application can be developed in a few lines of code. A 3D model is added to the basic 3D environment, which is rendered through the Oculus Rift. The presented source code executes the following actions (sketched after the list):

Fig. 5. The framework source code.

  • Framework initialization.

  • Load the model in STL file format.

  • Add the loaded 3D model to the 3D environment.

  • Set the field of view according to the position of the 3D model.

  • Start the rendering in the Oculus Rift.
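
Since the listing in Fig. 5 is not reproduced here, the following sketch performs the same five actions using plain desktop VTK (C++); the file name avatar.stl is a placeholder, and the in-house framework additionally redirects the final rendering to the Oculus Rift.

    #include <vtkSmartPointer.h>
    #include <vtkSTLReader.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkActor.h>
    #include <vtkRenderer.h>
    #include <vtkRenderWindow.h>
    #include <vtkRenderWindowInteractor.h>

    int main() {
        // 1. Initialization: renderer, window and interactor
        auto renderer = vtkSmartPointer<vtkRenderer>::New();
        auto window = vtkSmartPointer<vtkRenderWindow>::New();
        window->AddRenderer(renderer);
        auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
        interactor->SetRenderWindow(window);

        // 2. Load the model in STL file format
        auto reader = vtkSmartPointer<vtkSTLReader>::New();
        reader->SetFileName("avatar.stl");     // placeholder file name

        // 3. Add the loaded 3D model to the 3D environment
        auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
        mapper->SetInputConnection(reader->GetOutputPort());
        auto actor = vtkSmartPointer<vtkActor>::New();
        actor->SetMapper(mapper);
        renderer->AddActor(actor);

        // 4. Set the field of view according to the position of the 3D model
        renderer->ResetCamera();

        // 5. Start the rendering (the framework redirects it to the Oculus Rift)
        window->Render();
        interactor->Start();
        return 0;
    }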

3.2 Development of the Virtual Tape Measure

Starting from the VTK widget named contour widget, a virtual tape measure has been implemented. It is a 3D line that can be modified through its control points. The customized version of the contour widget can be applied to a surface, and each modification follows the surface of the 3D mesh on which it lies. TLAB graphically and numerically visualizes all the taken measurements on the avatar, so that the tailor can easily compare them and evaluate their correctness. Figure 6 shows some examples.
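
A minimal sketch of this customization, assuming avatarActor and avatarMesh (the rendered actor and vtkPolyData of the scanned body) and the interactor from the rendering setup already exist: VTK’s stock point placer and line interpolator constrain a contour widget to a polygonal surface, which is the behavior described above; the in-house version additionally replaces mouse events with Leap Motion gestures.

    #include <vtkSmartPointer.h>
    #include <vtkContourWidget.h>
    #include <vtkOrientedGlyphContourRepresentation.h>
    #include <vtkPolygonalSurfacePointPlacer.h>
    #include <vtkPolygonalSurfaceContourLineInterpolator.h>

    auto rep = vtkSmartPointer<vtkOrientedGlyphContourRepresentation>::New();

    // Keep every control point glued to the avatar surface
    auto placer = vtkSmartPointer<vtkPolygonalSurfacePointPlacer>::New();
    placer->AddProp(avatarActor);
    placer->GetPolys()->AddItem(avatarMesh);
    rep->SetPointPlacer(placer);

    // Make the line between control points follow the mesh surface
    auto interpolator =
        vtkSmartPointer<vtkPolygonalSurfaceContourLineInterpolator>::New();
    interpolator->GetPolys()->AddItem(avatarMesh);
    rep->SetLineInterpolator(interpolator);

    auto tapeMeasure = vtkSmartPointer<vtkContourWidget>::New();
    tapeMeasure->SetInteractor(interactor);
    tapeMeasure->SetRepresentation(rep);
    tapeMeasure->EnabledOn();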

Fig. 6. Basic measurements usually acquired by the tailor for a shirt.

The interaction with the virtual tape measure is performed through the Leap Motion device, which tracks the gestures used to operate it.

The virtual tape can be selected and modified by adding, removing or moving its control points with the index finger. To modify the path/length of the tape measure, the user touches the contour for 10 s: a new control point is added and the length is recalculated. If an existing control point is selected for more than 5 s, it turns green and the user can move it along the customer’s digital body until the right position for the measurement is reached. The measurement is visualized in the field of view of the Oculus Rift and is updated each time the tape measure is modified.
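
A sketch of the dwell-time logic under these rules; Vec3 and the four hooks are hypothetical names for the in-house implementation, and the 15 mm touch tolerance is an assumption.

    #include <chrono>

    struct Vec3 { double x, y, z; };   // fingertip position (hypothetical type)

    // Hypothetical hooks into the tape-measure widget:
    double distanceToContour(const Vec3& tip);      // mm, fingertip to the tape line
    double distanceToNearestPoint(const Vec3& tip); // mm, fingertip to a control point
    void   addControlPoint(const Vec3& tip);        // adds a point, recalculates length
    void   selectControlPoint(const Vec3& tip);     // turns it green, allows dragging

    void updateTape(const Vec3& indexTip) {
        using Clock = std::chrono::steady_clock;
        static Clock::time_point touchStart;
        static bool touching = false;

        const double kTouchRadiusMm = 15.0;         // assumed touch tolerance
        if (distanceToContour(indexTip) > kTouchRadiusMm) { touching = false; return; }

        if (!touching) { touching = true; touchStart = Clock::now(); }
        const auto dwell = std::chrono::duration_cast<std::chrono::seconds>(
                               Clock::now() - touchStart).count();

        if (distanceToNearestPoint(indexTip) < kTouchRadiusMm) {
            if (dwell >= 5) selectControlPoint(indexTip);  // 5 s on an existing point
        } else if (dwell >= 10) {
            addControlPoint(indexTip);                     // 10 s on the contour line
        }
    }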

4 Natural User Interface for TLAB

The NUI (Natural User Interface) can be defined as the set of gestures and actions performed by the user while interacting with the virtual environment. In our case, we analyzed the operations the tailor traditionally performs when manufacturing a garment, such as a shirt or a jacket.

S/he continually uses the hands to take measurements around the human body, as well as to move the customer’s legs or arms in order to define the best posture for correct measurements. Figure 6 shows some examples of measurements taken by the tailor to design a man’s shirt, together with the related postures. A limited set of gestures has been defined as follows:

  • Movements of the index finger. If the virtual tape measure mode is active, the user can interact with the tape measure as previously described.

  • Palm rotation. Rotating the palm rotates the human avatar.

  • Horizontal movement of both palms. The Leap Motion detects the distance between the palms: increasing or decreasing it zooms in and out of the 3D models (see the sketch after this list).
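
A minimal sketch of the two-palm zoom with the Leap Motion v2 C++ API; zoomCamera and the direct ratio mapping are assumptions about the in-house implementation, not part of the SDK.

    #include <Leap.h>

    void zoomCamera(double factor);   // hypothetical hook: factor > 1 zooms in

    void handleZoom(const Leap::Frame& frame, float& lastDistance) {
        if (frame.hands().count() != 2) { lastDistance = -1.0f; return; }

        Leap::Vector left  = frame.hands()[0].palmPosition();  // millimetres
        Leap::Vector right = frame.hands()[1].palmPosition();
        float distance = left.distanceTo(right);

        if (lastDistance > 0.0f)
            zoomCamera(distance / lastDistance);  // palms moving apart -> zoom in
        lastDistance = distance;
    }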

The NUI has been implemented using the basic framework previously mentioned to synchronize the Oculus Rift and the Leap Motion device with the 3D environment. To manage the various interaction modes and the defined gestures, the NUI consists of a limited number of states, represented by the finite state machine (FSM) shown in Fig. 7. The legend explains how TLAB changes interaction mode according to the detected gesture.

Fig. 7. Finite State Machine developed for TLAB.

The upper part shows the software interfaces to the Leap Motion device and the Oculus Rift, while the lower part describes the various interaction modes developed for TLAB.

The Hands state links the basic software interface of the Leap Motion device to TLAB. If no virtual widget (e.g., virtual sliders and 3D buttons) is selected, it handles the following events:

  1. Hand with a 90-degree angle between index finger and thumb, and positional tracker of the Oculus Rift deactivated. This event activates the Camera Mode.

  2. Hand with a 90-degree angle between index finger and thumb, and positional tracker of the Oculus Rift activated. This event activates the Object Mode.

  3. Hand near the Leap Motion (less than 10 cm). This event activates the Toggle Image Mode.

  4. Both hands closed. This event activates the Rule Mode.

The Starting Mode state monitors the activation gestures and updates the indicators of the current interaction mode (i.e., icons and images).

The Tape Measure state allows the user to interact with the virtual tape measure through a set of gestures that define the path of the tape along the human body and thus take the measurements for garment design (Fig. 7).

The Process Gesture state monitors the execution of basic gestures (e.g., circle, swipe and pinch) and executes the actions associated with a given gesture in the active mode. A sketch of the mode-switching logic is given below.
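
A minimal sketch of the mode switching performed by the Hands state; Mode, widgetSelected and the gesture predicates are hypothetical helpers wrapping Leap Motion and Oculus SDK queries, named here only for illustration.

    #include <Leap.h>

    enum class Mode { Starting, Camera, Object, ToggleImage, Rule, TapeMeasure };

    // Hypothetical predicates wrapping Leap Motion / Oculus SDK queries:
    bool widgetSelected();                      // a virtual slider/button is in use
    bool isLShape(const Leap::Hand& h);         // ~90 degrees between index and thumb
    bool isNearSensor(const Leap::Hand& h);     // palm closer than 10 cm to the device
    bool bothHandsClosed(const Leap::Frame& f);
    bool positionalTrackerActive();             // Oculus Rift positional tracking state

    Mode handsState(const Leap::Frame& frame, Mode current) {
        if (widgetSelected()) return current;   // events ignored while a widget is in use

        if (bothHandsClosed(frame)) return Mode::Rule;                // event 4
        if (frame.hands().count() == 1) {
            const Leap::Hand hand = frame.hands()[0];
            if (isNearSensor(hand)) return Mode::ToggleImage;         // event 3
            if (isLShape(hand))
                return positionalTrackerActive() ? Mode::Object       // event 2
                                                 : Mode::Camera;      // event 1
        }
        return current;                         // no activation gesture detected
    }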

5 Tests and Preliminary Results

Preliminary tests have been carried out to verify the potential of the implemented solution, especially with regard to the interaction with the Leap Motion device and the use of the virtual tape measure. As mentioned, the geometry of a potential customer was acquired using the Microsoft Kinect v2. We selected a man’s shirt as case study and identified the related standard measurements and postures; some examples are back shoulder width, sleeve length and waist circumference. Figure 6 shows some examples of measurements, while Fig. 8 shows the mixed reality environment and an example of hand interaction to take the sleeve length.

Fig. 8. Initial interaction for taking the sleeve measurement, and TLAB visualized inside the Oculus Rift.

The application made it possible to define the necessary postures and acquire all the customer’s measurements, which were also compared with the real ones. In general they are comparable, with differences within ±5 mm. The most critical measurements were the waist and chest circumferences, which require a tailor’s expertise. However, further tests have been planned. First, we will consider different types of garments, such as pants, skirts and dresses, since different avatar postures could be necessary. Secondly, a significant number of testers, including tailors, with different levels of IT skills will be involved. This will permit a more detailed analysis of measurement acquisition in a virtual environment and of its reliability. Furthermore, a comparison between real and virtual tasks will be carried out.

6 Conclusions

This paper presented an application that emulates the first phase of the garment design process, i.e. taking the customer’s measurements, allowing the tailor to interact with the customer’s avatar as s/he traditionally does. The acquired measurements can later be used to define the 2D patterns of the garment and, using a 3D CAD system, to numerically simulate its behavior on the avatar. Therefore, integrated with a body scanner and a 3D clothing system, it will permit the full virtualization of the garment development process.

Even if the first trials were successful, further tests are necessary, considering the variety of garments and users with different skills and expertise. This will not only allow us to validate the application but also to acquire the tailor’s knowledge that could be embedded within the system.

Future developments have been planned to include a first automatic generation of the 2D garment patterns starting from the taken measurements, and to integrate TLAB with 2D and 3D systems. New modules will be added to manage a database of postures and movements to be automatically applied to the customer’s avatar and to guide users during measurement acquisition. Finally, this research has been developed within the framework of a national industrial project, named BODY-SCAN, and we plan to test the developed application with the textile-clothing companies involved in the project.