
1 Introduction

Recent studies on human-computer interaction have been moving towards systems in which humans can interact with computers more naturally [15, 16, 22]. For example, voice commands have been integrated into computer systems and personal mobile devices to request information or to command actions. Virtual Reality (VR) and Augmented Reality (AR) are also increasingly popular in applications that let users interact with immersive environments. To achieve the key goal of VR and AR systems, the users must perceive themselves in a state of presence [6, 10, 17, 18]. Thus, a hand motion tracking system was proposed and developed [13, 14] as an alternative way for humans to interact more naturally within 3D environments. The hand motion tracking system uses an inertial measurement unit (IMU) to determine the orientation of the user's hand and infrared cameras to determine the position of the hand in 3-dimensional space, in real time. This paper focuses on the implementation and evaluation of orientation correction algorithms used to improve orientation tracking with IMUs.

The gyroscopes in IMUs provide inertial measurements in the form of angular velocity. In theory, the orientation (angle) can be obtained by mathematical integration. However, most commercial-grade Microelectromechanical Systems (MEMS) gyroscopes generate output signals that deviate from zero even when no rotational input is applied, an error called the "bias offset error". This error causes the common phenomenon in orientation tracking known as "drift", which grows proportionally with time. Drift causes severe problems in navigation and other applications that rely on commercial-grade MEMS gyroscopes to determine orientation [3, 19]. Kalman-based orientation filtering is a common method to improve orientation tracking and eliminate gyroscope drift in inertial measurement units [9, 20, 21]. In addition, several studies employ sensor fusion approaches, in which the measurements from accelerometers, gyroscopes, and magnetometers are combined to determine the orientation [2, 7].
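As a brief illustration (not part of the original system), the following Python sketch shows how integrating a gyroscope signal that carries a small constant bias offset produces an angle error that grows linearly with time; the sampling rate, bias magnitude, and noise level are arbitrary values chosen only for the example.

```python
# Illustrative sketch: naive integration of a biased gyroscope reading while
# the sensor is actually static, showing drift proportional to elapsed time.
import numpy as np

dt = 0.01                      # assumed sampling interval (100 Hz)
t = np.arange(0.0, 60.0, dt)   # one minute of data
true_rate = np.zeros_like(t)   # the sensor is not rotating (0 deg/s)
bias = 0.5                     # hypothetical bias offset, deg/s
noise = np.random.normal(0.0, 0.2, t.size)  # sensor noise, deg/s

measured_rate = true_rate + bias + noise
angle = np.cumsum(measured_rate) * dt       # integrated orientation estimate

print(f"Drift after 60 s: {angle[-1]:.1f} deg (expected ~{bias * 60:.1f} deg)")
```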

This paper statistically evaluates the effects of three different orientation correction algorithms on the orientation output errors in the form of Euler Angles (Phi, Theta and Psi). The three orientation correction algorithms are: (1) the orientation correction using fixed bias offset (FB), (2) the correction using the Kalman-based orientation filtering streamed directly from the Yost Labs 3-Space sensor module (KF), and (3) the proposed orientation correction algorithm using the gravity and magnetic North vectors (GMV).

2 Methodology and Materials

2.1 Hand Motion Tracking System

The 3D hand motion tracking system shown in Fig. 1 is capable of determining the hand position and orientation in 3-dimensional space. The OptiTrack V120: Trio infrared cameras in Fig. 1(a), located above the computer screen, track the position of an infrared dot marker attached to the glove. The 3D coordinate of the marker represents the position of the wrist. The orientation of the hand is determined by a Yost Labs 3-Space sensor attached to the back of the hand, as shown in Fig. 1(b). The Yost Labs 3-Space module contains a tri-axial accelerometer, gyroscope, and magnetometer, and connects to the host computer via a USB cable.

Fig. 1. Hand motion tracking system using (a) infrared cameras for position tracking and (b) inertial measurement unit for orientation tracking.

2.2 Orientation Correction Using the Gravity and Magnetic North Vectors

A gyroscope should ideally output zero angular velocity when it is not in motion. However, a commercial-grade gyroscope can generate a non-zero erroneous reading under these circumstances, called the "bias offset error". This error produces a major distortion of the orientation measurement, called "drift" [3, 19]. An algorithm to improve the orientation estimation using the gravity and magnetic North vectors (GMV) was previously proposed [12,13,14] and is implemented in this evaluation. The orientation correction algorithm using the gravity and magnetic North vectors, shown in Fig. 2, consists of the four steps described below.

Fig. 2. Block diagram of the orientation correction algorithm using the gravity vector and the magnetic North vector correction (GMV).

(1) Prediction of the Bias Offset Error. When the sensor is static, the gyroscope in the IMU is expected to read zero in all axes. However, commercial-grade MEMS gyroscopes can generate output signals that deviate from zero, called the "bias offset error". The bias offset error in an IMU consists of deterministic and stochastic components [1], causing the bias to change randomly over time. Thus, the bias offset error must be re-estimated periodically. The new bias offset error is estimated and updated only when the IMU is static. For real-time implementation, the gyroscope bias offset (\(\hat{b}\)) is calculated as the mean of five consecutive gyroscope samples whose magnitudes are smaller than predefined thresholds. The unbiased angular velocity (\(\varvec{\omega }_B\)) is then calculated by subtracting the estimated bias offset (\(\hat{b}\)) from the raw gyroscope reading (\(\varvec{\omega }_0\)), as described in Eq. 1.

$$\begin{aligned} \varvec{\omega }_B = \varvec{\omega }_0 - \hat{b} \end{aligned}$$
(1)
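The following Python sketch illustrates this bias re-estimation step and Eq. 1 under assumed names and values (STATIC_THRESHOLD, WINDOW, update_bias, and unbias are illustrative, not taken from the paper); the actual implementation may differ.

```python
# Minimal sketch of the bias prediction step: the bias is taken as the mean of
# five consecutive gyroscope samples whose magnitudes fall below a stillness
# threshold, and Eq. 1 subtracts that bias from the raw reading.
import numpy as np

STATIC_THRESHOLD = 0.02   # rad/s, assumed per-axis stillness threshold
WINDOW = 5                # number of consecutive static samples

def update_bias(recent_samples, current_bias):
    """recent_samples: (WINDOW, 3) array of raw gyroscope readings."""
    recent_samples = np.asarray(recent_samples)
    if recent_samples.shape[0] == WINDOW and \
       np.all(np.abs(recent_samples) < STATIC_THRESHOLD):
        return recent_samples.mean(axis=0)   # new bias estimate b_hat
    return current_bias                      # sensor is moving: keep old bias

def unbias(omega_raw, b_hat):
    """Eq. 1: omega_B = omega_0 - b_hat."""
    return np.asarray(omega_raw) - np.asarray(b_hat)
```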

(2) Estimation of a Quaternion Orientation. The rotation or orientation in this work is represented in quaternion notation, with the real part stored in the fourth component. A quaternion is a 4-dimensional quantity commonly used to describe the orientation of an object in computer graphics because it avoids the singularity and gimbal lock problems [8]. The quaternion rate (\(\dot{q}\)) in Eq. 2 is calculated from the unbiased angular velocity (\(\varvec{\omega }_B\)) and the quaternion estimate of the orientation from the previous iteration (\(\hat{q}_0\)). At the initial stage, \(\hat{q}_0\) is set to [0, 0, 0, 1], indicating a zero-degree rotation.

$$\begin{aligned} \dot{q} = \frac{1}{2}\hat{q}_0\otimes \varvec{\omega }_B \end{aligned}$$
(2)
$$\begin{aligned} q_G = exp((\varDelta t)\dot{q}\otimes \hat{q}_0^*)\otimes \hat{q}_0 \end{aligned}$$
(3)

The quaternion \(q_G\) in Eq. 3 represents the estimated orientation, which can be calculated from the quaternion rate (\(\dot{q}\)), the quaternion estimation of the orientation from the previous iteration (\(\hat{q}_0\)), and the sampling interval used by the IMU for recording data (\(\varDelta t\)). This estimated quaternion (\(q_G\)) can be used to describe the orientation of the sensor module or the object that the IMU is attached to.
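A minimal Python sketch of Eqs. 2 and 3 is shown below, assuming the paper's convention that the scalar (real) part occupies the fourth quaternion component; the helper names quat_mult, quat_conj, quat_exp, and integrate_orientation are illustrative and not taken from the original implementation.

```python
# Sketch of Eqs. 2-3 with quaternions stored as q = [x, y, z, w].
import numpy as np

def quat_mult(q1, q2):
    """Hamilton product of two [x, y, z, w] quaternions."""
    v1, w1 = q1[:3], q1[3]
    v2, w2 = q2[:3], q2[3]
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    w = w1 * w2 - np.dot(v1, v2)
    return np.concatenate([v, [w]])

def quat_conj(q):
    """Quaternion conjugate q* = [-x, -y, -z, w]."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def quat_exp(q):
    """Quaternion exponential: exp([v, w]) = e^w [sin|v| v/|v|, cos|v|]."""
    v, w = q[:3], q[3]
    n = np.linalg.norm(v)
    scale = np.sin(n) / n if n > 1e-12 else 1.0
    return np.exp(w) * np.concatenate([scale * v, [np.cos(n)]])

def integrate_orientation(q0_hat, omega_B, dt):
    """Propagate the previous estimate q0_hat by omega_B over one step dt."""
    omega_q = np.concatenate([omega_B, [0.0]])   # angular velocity as a pure quaternion
    q_dot = 0.5 * quat_mult(q0_hat, omega_q)     # Eq. 2
    q_G = quat_mult(quat_exp(dt * quat_mult(q_dot, quat_conj(q0_hat))), q0_hat)  # Eq. 3
    return q_G / np.linalg.norm(q_G)             # keep unit length
```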

(3) Quaternion Correction. In the Earth frame, the gravity vector always points towards the Earth's center. The estimated orientation \(q_G\) from Eq. 3, which represents the orientation of the sensor module with respect to the Earth frame, can be used to rotate the gravity vector in the Earth frame into the sensor's body frame, yielding the calculated gravity vector (\(\varvec{a}(q_G)\)), as shown in Eq. 4, where \(\varvec{A}_{int}\) is the initial measured gravity vector in the Earth frame. This calculated vector can then be compared with the measured gravity vector (\(\varvec{a}_0\)) from the accelerometer. If the IMU is in a static period, the accelerometer ideally measures only the acceleration due to gravity.

$$\begin{aligned} {\varvec{a}(q_G)} = {{q_G}^{*} \otimes {\varvec{A}_{int}} \otimes {q_G}} \end{aligned}$$
(4)

The error between the calculated gravity vector (\(\varvec{a}(q_G)\)) and the measured gravity vector (\(\varvec{a}_0\)) represents the error of orientation estimation and gives us the opportunity to correct the orientation estimates. The angular difference between these two vectors can be determined in the form of a quaternion denoted by \(\varDelta q_A\), as shown in Eq. 5.

$$\begin{aligned} \varDelta q_A = \mathbb {H}(\varvec{q}_{Av},q_{Aw}) \end{aligned}$$
(5)
$$\begin{aligned} \varvec{q}_{Av} = \varvec{a}_0 \times \varvec{a}(q_G) \end{aligned}$$
(6)
$$\begin{aligned} q_{Aw} = {\left\| \varvec{a}_0 \right\| } {\left\| \varvec{a}(q_G) \right\| } + \varvec{a}_0 \cdot \varvec{a}(q_G) \end{aligned}$$
(7)

To correct the orientation estimate, \(q_G\) is multiplied by \(\varDelta q_A\), since the product of two quaternions composes their rotations. The estimated quaternion with gravity vector correction (\(\hat{q}_{GA}\)) is given in Eq. 8. Note that \(\hat{q}_{GA}\) must be normalized before being used as a rotation operator.

$$\begin{aligned} \hat{q}_{GA} = q_G \otimes \varDelta q_A \end{aligned}$$
(8)
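The sketch below illustrates Eqs. 4-8 in Python, reusing quat_mult and quat_conj from the previous sketch; the same vector-alignment helper applies unchanged to the magnetic North correction of Eqs. 9-13 by substituting \(\varvec{m}_0\) and \(\varvec{M}_{int}\). The function names are illustrative, not from the paper.

```python
# Sketch of the gravity-vector correction (Eqs. 4-8); assumes quat_mult and
# quat_conj as defined in the previous sketch.
import numpy as np

def rotate_to_body(q_G, v_earth):
    """Eq. 4 / Eq. 9: express an Earth-frame vector in the sensor body frame."""
    v_q = np.concatenate([v_earth, [0.0]])
    return quat_mult(quat_mult(quat_conj(q_G), v_q), q_G)[:3]

def alignment_quaternion(v_measured, v_calculated):
    """Eqs. 5-7 (and 10-12): quaternion between measured and calculated vectors."""
    qv = np.cross(v_measured, v_calculated)                        # Eq. 6
    qw = (np.linalg.norm(v_measured) * np.linalg.norm(v_calculated)
          + np.dot(v_measured, v_calculated))                      # Eq. 7
    dq = np.concatenate([qv, [qw]])                                # Eq. 5
    return dq / np.linalg.norm(dq)

def correct_with_gravity(q_G, a_0, A_init):
    a_calc = rotate_to_body(q_G, A_init)        # calculated gravity a(q_G)
    dq_A = alignment_quaternion(a_0, a_calc)    # error rotation
    q_GA = quat_mult(q_G, dq_A)                 # Eq. 8
    return q_GA / np.linalg.norm(q_GA)          # normalize before use
```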

Similarly, in the Earth frame the magnetic North vector at a given location always points towards the Earth's magnetic North pole. The estimated orientation \(q_G\) from Eq. 3, which represents the orientation of the sensor module with respect to the Earth frame, can also be used to rotate the magnetic North vector in the Earth frame into the sensor's body frame, yielding the calculated magnetic North vector (\(\varvec{m}(q_G)\)), as shown in Eq. 9, where \(\varvec{M}_{int}\) is the initial measured magnetic North vector in the Earth frame. This calculated vector can then be compared with the measured magnetic North vector (\(\varvec{m}_0\)) from the magnetometer.

$$\begin{aligned} {\varvec{m}(q_G)} = {{q_G}^{*} \otimes {\varvec{M}_{int}} \otimes {q_G}} \end{aligned}$$
(9)

The error between the calculated magnetic North vector (\(\varvec{m}(q_G)\)) and the measured magnetic North vector (\(\varvec{m}_0\)) represents the error of orientation estimation and gives us a second opportunity to correct the orientation estimates. The angular difference between these two vectors can be determined in the form of a quaternion denoted by \(\varDelta q_M\), as shown in Eq. 10.

$$\begin{aligned} \varDelta q_M = \mathbb {H}(\varvec{q}_{Mv},q_{Mw}) \end{aligned}$$
(10)
$$\begin{aligned} \varvec{q}_{Mv} = \varvec{m}_0 \times \varvec{m}(q_G) \end{aligned}$$
(11)
$$\begin{aligned} q_{Mw} = {\left\| \varvec{m}_0 \right\| } {\left\| \varvec{m}(q_G) \right\| } + \varvec{m}_0 \cdot \varvec{m}(q_G) \end{aligned}$$
(12)

To correct the orientation estimation, \(q_G\) is multiplied by \(\varDelta q_M\). The estimated quaternion with magnetic North vector correction (\(\hat{q}_{GM}\)) is described in Eq. 13. Note that \(\hat{q}_{GM}\) needs to be normalized before using it as a rotation operator.

$$\begin{aligned} \hat{q}_{GM} = q_G \otimes \varDelta q_M \end{aligned}$$
(13)

(4) Adaptive Quaternion Interpolation. Adaptive quaternion interpolation is the last step of the orientation correction algorithm; it determines the final quaternion estimate based on the current conditions of the sensor. When the IMU is moving rapidly, the accelerometer no longer measures only the acceleration due to gravity, so it is no longer a reliable reference for orientation correction. The magnetometer, in contrast, is not directly affected by rapid hand movement. Spherical Linear Interpolation (SLERP), or great arc interpolation [4], is proposed as an approach to determine the final orientation estimate (\(\hat{q}_{OUT}\)) from the two corrected quaternions by means of an adaptive weight (\(\alpha \)).

$$\begin{aligned} \hat{q}_{OUT} = \frac{\hat{q}_{GM}sin((1-\alpha )\varOmega ) + \hat{q}_{GA}sin(\alpha \varOmega )}{sin(\varOmega )} \end{aligned}$$
(14)
$$\begin{aligned} \varOmega = cos^{-1}(\hat{q}_{GM} \cdot \hat{q}_{GA}) \end{aligned}$$
(15)

From Eq. 14, when \(\alpha \) equals 1, the final estimated quaternion orientation (\(\hat{q}_{OUT}\)) equals the estimate using only the gravity vector correction (\(\hat{q}_{GA}\)). This is appropriate, because \(\alpha = 1\) implies that the sensor is in a static period. When \(\alpha \) equals 0, \(\hat{q}_{OUT}\) equals the estimate using only the magnetic North vector correction (\(\hat{q}_{GM}\)). The adaptive weight (\(\alpha \)) thus indicates the stillness of the sensor module and is used to interpolate between the two quaternions: when the sensor is static, \(\alpha \) equals 1, and while the sensor is in rapid motion, \(\alpha \) drops towards zero.
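A minimal Python sketch of the adaptive SLERP step in Eqs. 14 and 15 is given below; it assumes the adaptive weight \(\alpha \) is supplied by a separate stillness detector and that both input quaternions are unit quaternions.

```python
# Sketch of Eqs. 14-15: spherical linear interpolation between the magnetic
# North corrected quaternion q_GM and the gravity corrected quaternion q_GA,
# weighted by the stillness indicator alpha (1 = static, 0 = rapid motion).
import numpy as np

def adaptive_slerp(q_GM, q_GA, alpha):
    omega = np.arccos(np.clip(np.dot(q_GM, q_GA), -1.0, 1.0))     # Eq. 15
    if omega < 1e-6:
        return q_GA                      # quaternions (nearly) coincide
    q_out = (q_GM * np.sin((1.0 - alpha) * omega)
             + q_GA * np.sin(alpha * omega)) / np.sin(omega)      # Eq. 14
    return q_out / np.linalg.norm(q_out)
```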

3 Implementation

Thirty human subjects were asked to participate in the evaluation experiment. Each subject wore the instrumented glove on his/her left hand and performed specific hand movement tasks. While the tasks were being performed, the marker coordinates from the OptiTrack V120: Trio and the data from an inertial measurement unit attached at the back of the hand were recorded. The orientation correction algorithm described in Sect. 2 was implemented to calculate the estimated orientation. The orientation estimates obtained with the orientation correction algorithm using the gravity and magnetic North vectors (GMV) were compared to those obtained with a fixed bias offset compensation (FB) and to the quaternion output from the Kalman-based orientation filtering streamed directly from the Yost Labs 3-Space sensor module (KF).

3.1 Testing Environment Setup

For this evaluation, each subject wore, on his/her left hand, the glove with an inertial measurement unit attached at the back of the hand. The subject sat on a chair facing the computer screen, with the OptiTrack V120: Trio sensor bar mounted above the screen. Between the subject and the computer screen, a rectangular frame with reference position markers was placed at a fixed distance from the screen. The testing environment was set up as shown in Fig. 3.

Fig. 3. Hand motion tracking system testing environment setup.

Fig. 4. The sequence of the 3D hand model movement (poses 1 to 10).

3.2 Virtual 3D Environment

A virtual 3D environment was created with Unity for this evaluation. It consists of a 3D hand model and the rectangular frame with reference position markers. A C# script attached to the 3D hand model visualized a sequence of 10 pre-defined movements of the hand model (shown in Fig. 4) as the hand movement guide for the subjects. Each subject was asked to replicate the movements with his/her left hand while wearing the instrumented glove, trying to match, for each pose, the position and orientation of the virtual hand shown on the screen. The position and orientation estimates from the system were recorded with the three orientation correction methods (FB, GMV and KF) applied in parallel.

3.3 Experiment Procedure

  1. Each subject was asked to wear the glove on his/her left hand and put the hand into the initial position and orientation (Pose 1 in Fig. 4).

  2. The experimenter clicked on the button labeled "Mark this position and orientation" to record the initial position and orientation of the hand.

  3. The experimenter clicked on the button labeled "Show hand movement", located on the bottom right corner of the screen. The 3D hand model then rotated and translated to the next state (pose) of the hand movement sequence.

  4. The subject moved his/her hand to match the position and orientation shown by the movement guide on the screen.

  5. The experimenter clicked on the button labeled "Mark this position and orientation" to record the current position and orientation of the subject's hand.

  6. Steps 3 to 5 of this procedure were repeated until all 10 states or poses of the 3D hand model movement had been performed by the subject.

  7. The subject was asked to remove the glove and answer a questionnaire inquiring about gender, age, and his/her dominant hand.

4 Results and Discussion

A total of 30 test subjects voluntarily participated in our experiment, which had been approved by the FIU IRB. There were 10 female and 20 male participants, with ages ranging from 19 to 55 years (26.53 years on average). All participants were healthy and able to move their hands without difficulty. Only one participant was left-handed; the remaining 29 participants were right-hand dominant. Position and orientation errors were calculated by treating the physical measurements and angles in the guiding frame as "ground truth".

4.1 Position Error Analyses

In our experiment, each subject moved his/her hand and placed it at 10 prescribed states or poses. The positions of the subjects' hands in three-dimensional space were recorded. A total of 300 rows of data (30 test subjects \(\times \) 10 states of hand movement) were statistically analyzed using SPSS. The descriptive statistics for the errors in position tracking are shown in Table 1. The mean position error is 1.7 cm in the x-axis, 1.0 cm in the y-axis, and 3.5 cm in the z-axis. The estimated marginal means of the position errors in x, y and z are shown in Figs. 5, 6 and 7. Notice that the mean position errors for the 4th, 8th and 9th states or poses are higher than for the other states in all axes. This could be due to the less natural hand movements required for those states, which made it difficult for the test subjects to exactly match the pre-defined hand positions requested.

Table 1. Descriptive statistics for errors in position tracking (in meters)

Fig. 5. Estimated marginal means of the position errors in x (m).

Fig. 6. Estimated marginal means of the position errors in y (m).

Fig. 7. Estimated marginal means of the position errors in z (m).

4.2 Orientation Error Analyses

In our experiment, each subject moved his/her hand through a sequence of 10 hand poses, or states, while three different orientation correction algorithms ran simultaneously to estimate the orientation. The purpose of this experiment was to evaluate the effects of the three orientation correction algorithms on the orientation output errors. The orientation errors were calculated under the assumption that the subjects oriented their hands exactly as required in each pose, aided by the frame provided. The orientation output errors in the form of Euler angles (Phi, Theta and Psi) were calculated for the three orientation correction algorithms: (1) the orientation correction using fixed bias offset (FB), (2) the correction using the Kalman-based orientation filtering streamed directly from the Yost Labs 3-Space sensor module (KF), and (3) the proposed orientation correction using the gravity and magnetic North vectors (GMV).

A total of 900 rows of data (30 test subjects \(\times \) 10 states of hand movement \(\times \) 3 algorithms) were recorded and statistically analyzed using SPSS. Table 2 shows the estimated means of the orientation errors in all Euler angles. Notice that the means of the orientation errors for GMV and KF are less than those for FB in every Euler angle. The means of the orientation errors for GMV are similar to the means of the orientation errors for KF in every Euler angle. The estimated marginal means of the orientation errors for Phi, Theta and Psi are shown in Figs. 8, 9 and 10, respectively.

Table 2. Estimated means of the orientation output errors (in degrees)

Table 3. Tests of normality

Table 4. Levene's test of equality of error variances

Fig. 8. Estimated marginal means of the orientation errors for Phi (deg).

Fig. 9. Estimated marginal means of the orientation errors for Theta (deg).

Fig. 10. Estimated marginal means of the orientation errors for Psi (deg).

To test for the effects of the three algorithms on the orientation output errors, a multivariate analysis of variance (MANOVA) was initially considered. Before performing an analysis of variance, the data must satisfy two assumptions: normality of the errors and equal variances across treatments. The normality test evaluates the null hypothesis that the data within each treatment group are normally distributed. Table 3 shows the Kolmogorov-Smirnov and Shapiro-Wilk test statistics, with p-values of 0.000 (the null hypothesis is rejected), indicating strong evidence that the orientation output errors are not normally distributed. Table 4 shows the test statistics for the null hypothesis that the error variance of the dependent variable is equal across treatment groups. The p-values of the test of homogeneity of variances are 0.000 (the null hypothesis is rejected) for all three dependent variables (Phi, Theta and Psi), indicating strong evidence that the error variances are not equal among the three treatment groups (three algorithms). Since the normality and homogeneity of variance assumptions are not met, MANOVA cannot be used as the statistical test model for these data.
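For illustration only, the following Python sketch reproduces the flavor of these assumption checks using SciPy in place of SPSS; the error samples are synthetic stand-ins, not the recorded data, so the printed statistics do not correspond to Tables 3 and 4.

```python
# Sketch of the normality and homogeneity-of-variance checks on one Euler
# angle, with hypothetical error samples standing in for the recorded data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
err_fb  = rng.gamma(2.0, 8.0, 300)   # hypothetical FB errors (larger spread)
err_gmv = rng.gamma(2.0, 3.0, 300)   # hypothetical GMV errors
err_kf  = rng.gamma(2.0, 3.0, 300)   # hypothetical KF errors

for name, err in [("FB", err_fb), ("GMV", err_gmv), ("KF", err_kf)]:
    w, p = stats.shapiro(err)                  # Shapiro-Wilk normality test
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

w, p = stats.levene(err_fb, err_gmv, err_kf)   # homogeneity of variances
print(f"Levene's test: W={w:.3f}, p={p:.3f}")
```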

In this situation, the Kruskal-Wallis H test is suggested as a nonparametric alternative to the usual analysis of variance [11]. The Kruskal-Wallis test is a rank-based nonparametric test used to determine the statistical significance of the differences of a dependent variable across two or more treatment groups [5]. Each dependent variable (Phi, Theta and Psi) was tested with Kruskal-Wallis at a 0.05 level of significance to determine whether there is a difference across the three algorithms. Table 5 shows the test statistics for the orientation output errors in the Euler angles across the three algorithms, where the null hypothesis is that the distribution of orientation errors in an Euler angle is the same across the three algorithms.
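The sketch below shows how such a Kruskal-Wallis test, and a possible nonparametric pairwise follow-up, could be run with SciPy on the synthetic arrays from the previous sketch; the pairwise procedure used by SPSS may differ, so the Mann-Whitney tests with Bonferroni correction are only an assumed stand-in.

```python
# Kruskal-Wallis H test across the three algorithms for one Euler angle,
# reusing the hypothetical err_fb, err_gmv, err_kf arrays defined above.
from scipy import stats

h, p = stats.kruskal(err_fb, err_gmv, err_kf)
print(f"Kruskal-Wallis: H(2)={h:.3f}, p={p:.3f}")

# Assumed pairwise follow-up: Mann-Whitney U tests, Bonferroni-adjusted.
pairs = [("KF vs FB", err_kf, err_fb),
         ("GMV vs FB", err_gmv, err_fb),
         ("KF vs GMV", err_kf, err_gmv)]
for label, a, b in pairs:
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{label}: U={u:.1f}, adjusted p={min(1.0, p * len(pairs)):.3f}")
```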

The result for Phi in Table 5 indicates that there is a statistically significant difference between the orientation errors in Phi produced by different algorithms (\(H(2)=80.773, p=0.000\)), with a mean rank of 560.48 for FB, 400.60 for GMV and 390.43 for KF. In pairwise comparisons among the three algorithms, shown in Table 6, the results for Phi indicate that there are statistically significant differences of the orientation errors in Phi between KF and FB (\(p=0.000\)) and between GMV and FB (\(p=0.000\)). There is no statistically significant difference of the orientation errors in Phi between KF and GMV (\(p=0.632\)).

The result for Theta in Table 5 indicates that there is a statistically significant difference between the orientation errors in Theta produced by different algorithms (\(H(2)=89.439, p=0.000\)), with a mean rank of 565.57 for FB, 404.86 for GMV and 381.06 for KF. In pairwise comparisons among the three algorithms, shown in Table 6, the results for Theta indicate that there are statistically significant differences of the orientation output errors in Theta between KF and FB (\(p=0.000\)) and between GMV and FB (\(p=0.000\)). There is no statistically significant difference of the orientation errors in Theta between KF and GMV (\(p=0.262\)).

The result for Psi in Table 5 indicates that there is a statistically significant difference between the orientation errors in Psi produced by different algorithms (\(H(2)=89.528, p=0.000\)), with a mean rank of 566.37 for FB, 396.26 for GMV and 388.87 for KF. In pairwise comparisons among the three algorithms, shown in Table 6, the results for Psi indicate that there are statistically significant differences of the orientation errors in Psi between KF and FB (\(p=0.000\)) and between GMV and FB (\(p=0.000\)). There is no statistically significant difference of the orientation errors in Psi between KF and GMV (\(p=0.728\)).

Table 5. Kruskal-Wallis test for orientation errors in the Euler angles
Table 6. Pairwise comparison among three algorithms for orientation errors in the Euler angles.

Figures 8, 9 and 10 show the estimated marginal means of the orientation errors found using the three correction methods. The errors in Euler angles for the orientation correction algorithm using the gravity and magnetic North vectors (GMV) are similar to those for the on-board Kalman-based orientation filtering (KF) for every hand movement in the sequence. The large orientation errors from both GMV and KF in some poses could be caused by the difficulty the subjects experienced in trying to exactly match the pre-defined hand orientations requested on the computer screen. It is therefore possible that the true errors of both algorithms are smaller, and that both algorithms estimated the actual orientation of the hand.

5 Conclusion

The statistical analyses show that the orientation correction algorithm using the gravity and magnetic North vectors can significantly reduce the errors in orientation tracking compared to the orientation correction algorithm using a fixed bias offset. The orientation correction algorithm using the gravity and magnetic North vectors and the on-board Kalman-based orientation filtering produced orientation errors that were not significantly different in Phi, Theta and Psi. The development of hand motion tracking systems using IMUs and infrared cameras can be a significant contribution to improving the realism of natural human-computer interaction within 3D virtual environments.