
1 Introduction

1.1 Background and Purpose

According to “Virtual Reality Science” [1], virtual reality (VR) is a technology aimed at providing an environment that is, for a human, substantially equivalent to the real environment. It is characterized by self-projectability (SP), three-dimensional spatiality (TS), and real-time interactivity (RI). SP refers to the capacity to realize a consistent state across different human sensory modalities. TS and RI can be measured using objective values; SP, however, is challenging to evaluate because it is a subjective feeling. Gallagher [2] defined the sense of agency (SoA) as “the feeling that one is causing a particular movement, and that it is oneself and not anyone else.” SoA, or the sense of embodiment (SoE) as a broader concept, is considered to correspond to the concept of SP. SoA was initially studied in the field of cognitive neuroscience, and several studies on the subjective characteristics of SoE are ongoing, including in real environments. Among them, Hannah et al. [3] advocate the usefulness of applying the concept of SoA from neuroscience to human-computer interaction (HCI), the field of research concerned with communication between humans and computers. In recent years, HCI research incorporating VR technology has been increasing. Nevertheless, because SoE is a subjective feeling, its evaluation is challenging, and no index for its quantitative evaluation has been established. A successful quantitative evaluation of SoE would enable quantitative evaluation of all three elements of virtual reality, which would contribute significantly to the development of VR technology.

This paper aims to elucidate the contributions of the haptic and visual senses to SoE in a VR environment through physiological and behavioral measurement. To this end, we employed a rod tracking task, in which a rod held in the hand is passed along a sinusoidal path.

2 SPIDAR-HS

2.1 SPIDAR

SPIDAR is a haptic device built from modules, each consisting of a motor with a rotary encoder and a string. A module displays force on the end effector attached to the end of its string via the motor and measures the end effector's position via the encoder. Various haptic devices have been devised and constructed using the SPIDAR system. For example, SPIDAR-G is a 7-DOF haptic device, SPIDAR-I is a 6-DOF haptic device with the strings built into its interior, and SPIDAR-S is a variant for mobile devices that uses audio output. The SPIDAR system proposed in this paper, extended to human scale, is named SPIDAR-HS, and we have improved this experimental environment. Figure 1 shows the SPIDAR-HS system; the green cloth is used for chroma keying.
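To make the module principle concrete, the following minimal sketch (our own illustration, not the SPIDAR-HS firmware; the pulley layout and the solver are assumptions) estimates one 3-DOF haptic point from the four measured string lengths of a four-module cluster, as used for each point in the two-point setup described below, by nonlinear least squares.

```python
# Minimal sketch: estimating a 3-DOF haptic point from four string
# lengths, as in one half of the two-point SPIDAR-HS setup.
# Pulley positions and the solver are illustrative assumptions.
import numpy as np

PULLEYS = np.array([  # four pulley anchors on the frame (m), assumed
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
])

def estimate_point(lengths, iters=50):
    """Gauss-Newton fit of point p so that |p - pulley_i| ~= lengths[i]."""
    p = PULLEYS.mean(axis=0)          # start at the frame centre
    for _ in range(iters):
        diff = p - PULLEYS            # (4, 3)
        dist = np.linalg.norm(diff, axis=1)
        r = dist - lengths            # residuals (4,)
        J = diff / dist[:, None]      # Jacobian d(dist)/dp
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p -= step
        if np.linalg.norm(step) < 1e-9:
            break
    return p

# Synthetic check: lengths measured from a known point are recovered.
true_p = np.array([0.3, 0.6, 0.4])
lengths = np.linalg.norm(true_p - PULLEYS, axis=1)
print(estimate_point(lengths))  # ~ [0.3 0.6 0.4]
```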

Fig. 1.

SPIDAR-HS. (Color figure online)

2.2 System Configuration

Device

  (a) VIVE Pro HMD

The VIVE Pro HMD is used as the immersive visual display; its specifications are shown in Table 1.

The VIVE Pro HMD can track 360° in a play area from at least 2 × 1.5 m up to 4.5 × 4.5 m. Using the front camera, the user can check the surrounding situation while wearing the HMD. With this camera and the chroma key, the user's real hands and specific objects can be projected into the virtual world constructed by CG. The base stations, necessary for tracking the HMD in the play area, are installed diagonally on the frame of SPIDAR-HS.

  (b) SPIDAR-HS

SPIDAR-HS has been improved to solve a problem found in the previous experiments reported in [5], and we applied two-point control to the rod tracking task. The previous rod tracking task was performed using a 6-DOF SPIDAR-GCC system, whose control method enables position measurement and force display with 3-DOF position and 3-DOF rotation on a single end effector using eight modules. With this control method, the rotation of the virtual end effector became unstable and the force presentation became inaccurate because of the shape of the end effector. To avoid this problem, we adopted a two-point control system consisting of two 3-DOF haptic points, each driven by four modules.

Table 1. VIVE Pro HMD specifications [4]

The benefit of this system is that it controls two points. For instance, by attaching the two points to the index finger and thumb respectively, the user can pinch and move an object with the fingers while receiving a haptic sense.

In this system, the end effector is a rod with the two points attached at two different positions along it. The SPIDAR-HS system can thus present a 5-DOF haptic sense and measure 5-DOF position and posture, everything except the roll of the rod.

The end effector in the rod tracking task is a rod of 501 mm length and 10 mm diameter. The virtual end effector exists in the virtual environment; it reflects the position and rotation of the actual end effector and presents an appropriate end effector visually.
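As a hedged illustration of this 5-DOF measurement (the function and notation below are our own, not part of Spidar.dll), the virtual rod pose can be recovered from the two measured points as follows; roll about the rod axis remains undetermined, which is why only five degrees of freedom are measured.

```python
# Minimal sketch (our own notation): recovering the 5-DOF rod pose from
# the two controlled points.  Roll about the rod axis is unobservable.
import numpy as np

def rod_pose(p1, p2):
    """Return (position, unit axis) of the virtual rod.

    p1, p2 -- measured 3-DOF positions of the two haptic points.
    The rod position is taken as the midpoint, matching the rod
    tracking task; the axis gives pitch and yaw but says nothing
    about roll, so only 5 DOF are determined.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    position = (p1 + p2) / 2.0
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    return position, axis

pos, axis = rod_pose([0.0, 1.0, 0.0], [0.0, 1.0, 0.501])
print(pos, axis)   # midpoint of the 501 mm rod and its direction
```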

We also changed the motor and encoder; Tables 2 and 3 show the motor and encoder specifications, respectively. The string used is PE No. 6 fishing line.

Table 2. Motor specifications [6]
Table 3. Encoder specifications [7]

Software Configuration

  (a) Force feedback

Figure 2 shows the system configuration of the constructed device. Visual and haptic information are provided by the VIVE Pro HMD and SPIDAR-HS, respectively. Unity is a development environment for 3D applications.

Fig. 2.

The SPIDAR-HS system configuration diagram.

The flow of force sense presentation in this system is shown below.

  1. The control unit reads the change in length of each string connected to the end effector.

  2. The positions of both ends of the end effector are passed to Unity via Spidar.dll.

  3. The force to be presented is calculated from the difference in position between the virtual and the real end effector.

  4. When the force to be presented is passed to Spidar.dll, the dll issues commands to the relevant motors to wind up their strings.

  5. The specified motors start winding based on the commands, the end effector is pulled in the designated direction, and the force sense is presented to the user holding it.

The haptic update frequency is 50 Hz, and the minimum guaranteed frequency is 33 Hz.
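A minimal sketch of this loop follows; `device.read_positions`, `scene.spring_damper_forces`, and the other names are hypothetical stand-ins for the actual Spidar.dll bindings and Unity callbacks, which we do not reproduce here.

```python
# Sketch of the 50 Hz force-feedback loop (steps 1-5 above).
# `device` and `scene` are hypothetical stubs, not the real bindings.
import time

DT = 1.0 / 50.0          # target haptic update period (50 Hz)

def haptic_loop(device, scene, should_stop):
    while not should_stop():
        t0 = time.perf_counter()
        # 1-2. string-length changes -> positions of both rod ends
        p1, p2 = device.read_positions()
        # 3. force from the virtual/real position difference
        #    (spring-damper model, Eqs. (1)-(3) below)
        f1, f2 = scene.spring_damper_forces(p1, p2)
        # 4. hand the forces to the dll, which winds specific motors
        device.send_forces(f1, f2)
        # 5. winding draws the rod in the designated direction;
        #    sleep to stay near 50 Hz (33 Hz guaranteed minimum)
        time.sleep(max(0.0, DT - (time.perf_counter() - t0)))
```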

Details of the haptic model in the rod tracking task are described below. Figure 3 shows a conceptual diagram of the haptic model, where \( \vec{v} \) is the normal vector at the contact point calculated by Unity, and \( \vec{N} \) is the force calculated by the spring-damper model.

Fig. 3.

A haptic model for rod tracking task.

Using the notations in Fig. 3, the force sense to be presented is given by Eqs. (1)–(3).

$$ \vec{N} = K \cdot \left( \vec{P}_{\mathrm{R}} - \vec{P}_{\mathrm{V}} \right) + B \cdot \left( \dot{\vec{P}}_{\mathrm{R}} - \dot{\vec{P}}_{\mathrm{V}} \right) $$
(1)
$$ \vec{N}_{V} = \frac{\vec{v}}{\left| \vec{v} \right|} \cdot \left| \vec{N} \right| $$
(2)
$$ \left\{ \begin{aligned} \vec{f}_{1} &= \frac{L - \overline{\mathrm{GC}} + \overline{\mathrm{GT}}}{\overline{\mathrm{GC}}} \cdot \vec{N}_{V} , \\ \vec{f}_{2} &= \vec{N}_{V} - \vec{f}_{1} , \end{aligned} \right. $$
(3)

where \( K \) and \( B \) are the spring and damper constants, respectively. The force distribution in Eq. (3) is based on the assumption that the contact point and the upper end of the rod are on the same side as seen from the gripping point. If this assumption is not satisfied, that is, if the contact point and the upper end of the rod are on different sides as seen from the gripping point, \( \vec{f}_{1} \) must be replaced with \( \vec{f}_{2} \).
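The model of Eqs. (1)–(3) can be sketched as follows. This is a minimal illustration: the constants \( K \) and \( B \) and the geometry values are placeholders, not the calibrated SPIDAR-HS parameters.

```python
# Sketch of Eqs. (1)-(3): spring-damper force and its split across the
# two string attachment points.  K, B and the geometry are placeholders.
import numpy as np

K, B = 200.0, 5.0        # spring and damper constants (assumed)

def contact_force(p_real, p_virt, v_real, v_virt, n_hat):
    """Eq. (1), then redirected along the contact normal per Eq. (2)."""
    n = K * (p_real - p_virt) + B * (v_real - v_virt)          # Eq. (1)
    return n_hat / np.linalg.norm(n_hat) * np.linalg.norm(n)   # Eq. (2)

def split_force(n_v, L, gc, gt):
    """Eq. (3): distribute N_V to the two attachment points.

    L  -- rod length, gc -- grip-to-contact distance (overline GC),
    gt -- grip-to-top distance (overline GT), as in Fig. 3.
    """
    f1 = (L - gc + gt) / gc * n_v
    f2 = n_v - f1
    return f1, f2

n_v = contact_force(np.array([0.0, 0.52, 0.0]), np.array([0.0, 0.50, 0.0]),
                    np.zeros(3), np.zeros(3), np.array([0.0, 1.0, 0.0]))
print(split_force(n_v, L=0.501, gc=0.30, gt=0.25))
```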

  (b) Chroma key

Chroma keying is a technique for making a specific color component transparent, thereby allowing part of an image to pass through and be combined with another image. By using the chroma key, the user's real body can be displayed in the VR environment, which is expected to raise SoE considerably. In a preparatory experiment, we confirmed that the VIVE's front camera can be used as the chroma key camera. The image acquired by the camera is processed by the surface shader* based on the algorithm below, displayed on a quad**, and composited into the VR environment.

The surface shader uses the following values: the color to be made transparent (KC), the hue distance (N), the saturation distance (SN), the transparency gradient (G), and the transparent hue distance (TN). The shader converts the image from the RGB color space to the HSV color space. The absolute difference between the converted color and KC is calculated; if the hue difference is smaller than N and the saturation component is larger than SN, the pixel of interest is not drawn. Additionally, when the hue difference is smaller than TN and the saturation component is larger than SN, the transparency of the pixel of interest is adjusted based on the value of G. Figure 4 shows the chroma key in operation: the hand in the center of the screen is cut out from the wall on the upper side of the screen.

Fig. 4.

Chroma key situation.

*Surface shader: a simplified way of writing vertex and fragment shaders in Unity.

**Quad: a Unity primitive game object; a rectangular mesh with four vertices.
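For illustration, the per-pixel test described above can be sketched as follows. The actual implementation runs as a Unity surface shader; this version is only a sketch, and the exact fade formula for G is our own assumption, since the text above only states that transparency is adjusted based on G.

```python
# Sketch of the chroma-key test, per pixel, using the same thresholds
# (KC, N, SN, G, TN).  The G fade formula is an assumption.
import colorsys

def chroma_alpha(rgb, kc_hsv, n, sn, g, tn):
    """Return the output alpha for one pixel (0 = discarded)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)          # RGB -> HSV
    hue_diff = min(abs(h - kc_hsv[0]), 1.0 - abs(h - kc_hsv[0]))
    if hue_diff < n and s > sn:                  # inside key colour: cut
        return 0.0
    if hue_diff < tn and s > sn:                 # near key colour: fade
        return max(0.0, 1.0 - g * (tn - hue_diff) / tn)
    return 1.0                                   # keep pixel as-is

# A greenish pixel tested against a green key colour is discarded:
print(chroma_alpha((0.1, 0.9, 0.2), kc_hsv=(1/3, 1.0, 1.0),
                   n=0.05, sn=0.3, g=1.0, tn=0.12))
```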

3 System Evaluation and Rod Tracking Task

3.1 Measurement of Position Error in the Working Space

Our purpose in this section is to measure the position error in the working space and thereby evaluate the constructed environment. The workspace is a cube of about 1 m centered on the center of SPIDAR-HS. With the origin of the coordinates at the center of the device, the end effector was moved at a uniform speed for about 10 s to (−0.5, −0.5, 0.5), one of the vertices of the working-space cube. Since we adopted two-point control, the position coordinates of the two virtual end-effector points, point 1 and point 2, were recorded in Unity. This was repeated ten times.
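A sketch of the statistics reported in Sect. 4.1 follows; the data layout and the exact definitions of relative error and coefficient of variation are our assumptions about the recorded logs, not taken from the paper.

```python
# Sketch: relative error and coefficient of variation per axis over the
# ten recorded trajectories.  Layout (trials x samples x xyz) assumed.
import numpy as np

def accuracy_stats(recorded, commanded):
    """recorded: (trials, samples, 3) positions logged in Unity;
    commanded: (samples, 3) ideal uniform-speed trajectory."""
    mean_traj = recorded.mean(axis=0)                     # over trials
    rel_err = (np.abs(mean_traj - commanded).mean(axis=0)
               / np.abs(commanded).mean(axis=0))          # per axis
    cv = (recorded.std(axis=0).mean(axis=0)
          / np.abs(mean_traj).mean(axis=0))               # spread/size
    return rel_err, cv

rng = np.random.default_rng(0)
ideal = np.linspace([0, 0, 0], [-0.5, -0.5, 0.5], 100)   # to the vertex
logs = ideal + rng.normal(0, 0.002, size=(10, 100, 3))   # 10 trials
print(accuracy_stats(logs, ideal))
```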

3.2 Rod Tracking Task

Outline of the Experiment

This subsection aims to investigate and clarify the influence of force sense presentation in the rod tracking task in a VR environment through EMG measurement.

Figure 5 shows the experimental outline, and Fig. 6 shows the end effector used in this task. Haptics are presented by controlling the tension of the strings attached to both ends of the end effector. The position of the virtual end effector is set at the midpoint of the two points. Myoelectric potential measurement is carried out when the virtual end effector touches the boundary of the path.
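As a hedged sketch of the contact test that triggers EMG recording (the path amplitude, wavelength, and clearance below are illustrative values of our own, not the task parameters in Table 4), a boundary touch might be detected as follows.

```python
# Sketch: the virtual end effector (midpoint of the two points) touches
# the boundary when it drifts farther from the sinusoidal centreline
# than the clearance.  All three constants are illustrative.
import math

AMP, WAVELEN, CLEARANCE = 0.10, 0.40, 0.02   # metres (assumed)

def touches_boundary(midpoint):
    """midpoint: (x, y, z); path runs along z with y = AMP*sin(...)."""
    x, y, z = midpoint
    y_path = AMP * math.sin(2.0 * math.pi * z / WAVELEN)
    return abs(y - y_path) > CLEARANCE       # outside the path: contact

if touches_boundary((0.0, 0.13, 0.1)):
    print("contact -> record EMG sample")
```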

Fig. 5.

Rod tracking task experimental outline diagram.

Fig. 6.

An end effector used in rod tracking task.

Detail of Rod Tracking Task

Table 4 shows the detailed conditions of the rod tracking task.

Table 4. Detailed conditions of the task.

The experimental procedure is shown below.

  1. Keep the right arm relaxed. In this state, the experimenter measures the myoelectric potential for 10 s as the resting myoelectric potential.

  2. Wear the HMD and grip the end effector.

  3. Touch the virtual end effector to the starting point (the far end of the path), count aloud from 1 to 5 for about 5 s, and move the rod along the path to the turning point (the near end of the path).

  4. Touch and count again at the turning point, then pass the rod along the path back to the starting point.

  5. When the participant returns to the starting point, count again.

In this task, the experimenter had the participant sit so that his body was parallel to the desk and instructed him to perform the task with only his arm, moving his head and body as little as possible. After steps 1 to 5, a subjective evaluation questionnaire was administered. Under these conditions, the task was repeated until the number of contacts between the rod and the boundary of the path either became less than or equal to ten, or increased from the previous run.
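The repetition rule can be sketched as follows; this is a minimal illustration, and `run_task` is a hypothetical stand-in for one pass of steps 1 to 5.

```python
# Sketch of the stopping rule: stop once the contact count is at most
# ten, or as soon as it increases over the previous run.
def run_until_criterion(run_task):
    """run_task() performs steps 1-5 and returns the contact count."""
    prev = None
    while True:
        contacts = run_task()
        if contacts <= 10:
            return contacts           # reached the target accuracy
        if prev is not None and contacts > prev:
            return contacts           # performance got worse: stop
        prev = contacts               # an equal count means another run
```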

Items on the subjective evaluation questionnaire included: “In which section did you touch the boundary of the path, and why?”, “What did you do to avoid touching?”, and “Which section was the easiest, and which was the most challenging?” Twelve right-handed males in their twenties participated in this experiment.

4 Results and Discussion

4.1 Measurement of the Position Error in the Working Space

Let one of the points of the two-point control be point 1 and the other be point 2. In the rod tracking task, the position of the virtual end effector is the midpoint of the two points. Table 5 shows the relative error and coefficient of variation for the x, y, and z components of each point, as well as the relative error of the virtual end effector.

Table 5. Relative error and coefficient of variation of the virtual end effector.

Table 5 shows that the experimental result does not differ significantly from the previous result obtained in [5]; indeed, the relative error is smaller than in that previous research. The inaccurate force sense that was a problem in [5] was improved in this paper by introducing two-point control, and the displayed force, although not measured, was felt as correct. Based on the experimental results, sufficient accuracy was ensured in the workspace.

5 Conclusions

The purpose of this paper is to construct an experimental environment for establishing a quantitative evaluation index of SoE. To achieve this goal, we constructed a VR environment and conducted physiological behavior measurements during a task performed in it.

In the rod tracking task, there was a problem with the force sense presentation in the previous environment. To solve this problem, we implemented force presentation using a two-point control technique. In addition, the lack of visual information, such as shadows, was remedied.

Hence, in this paper, we achieved the following. First, the position accuracy of the SPIDAR-HS proposed in this paper is considered good. Second, using the two-point control system, force was presented more appropriately than with the previous method.

From the above findings, we list the remaining problems and prospects of this research. A more accurate presentation of haptic information is required: in this research, haptics rely on the magnitude of force given by the spring-damper model, but the rod tracking task requires more precise haptics, so it may be necessary to devise a force presentation method that adopts a different force presentation model or combines multiple models. Finally, the visual presentation in this research has problems such as the absence of the user's hand and an appearance that partially differs from the real one; these seem closely related to SoE and are to be addressed by introducing the chroma key technology described in Sect. 2.2, Software Configuration (b). Moreover, it is crucial to clarify whether the haptic and visual improvements described above have any effect on the myoelectric potential, and to establish the SoE index.