1 Introduction

When referring to autism spectrum disorder (ASD), people tend to focus on its core deficits in social communication and interaction [1]. However, the high prevalence of motor impairments and delays in the ASD population cannot be overlooked [2]. Previous studies have noted several motor abnormalities in children with ASD, including atypical gait, postural control and upper limb movements [3, 4]. In particular, atypical fine motor control, such as abnormalities in eye-hand coordination, grasping and reaching, and less accurate manual dexterity, has been found among children with ASD [5, 6].

Early diagnosis and early enrollment in ASD intervention offer many benefits, including improved treatment outcomes, earlier educational planning, and appropriate resources for health improvement [7]. However, ASD diagnosis is a difficult and complex task due to the wide range of symptoms involved. Currently, ASD diagnosis relies on the clinical evaluation of autism-specific behaviors via standardized interviews, observations and questionnaires, which is often time- and resource-consuming [8]. As a growing number of studies provide evidence of atypical motor patterns in children with ASD, understanding and exploring these motor signatures provides a new methodology to facilitate early ASD diagnosis [9].

Computer-aided systems (e.g., robots and tablets) have been increasingly applied in ASD intervention, taking advantage of their ability to provide an engaging intervention environment as well as to record objective and quantitative performance data [10,11,12]. Equipment with sensors that detect and measure motor information is now easily accessible. Compared with paper-based motor function assessments, such as the Beery VMI [13] and the Mullen Scales [14], motor tasks in the form of computer tasks/games are more likely to be accepted by children with ASD, and can provide computational measures that enable the exploration of motor patterns using pattern recognition approaches.

The preliminary study presented in this paper aimed to investigate whether fine motor information is useful for identifying children with ASD, and to find the most discriminative fine motor features that distinguish children with ASD from typically developing (TD) children. In this study, we employed a novel Haptic-Gripper virtual reality system, which provided fine motor tasks that required grip, hold, reach and touch manipulations from the participants, and recorded the manipulation data, such as grip force and movement location. We recruited six children with ASD and six TD children, each of whom independently performed eight fine motor tasks with minimal disturbance. The recorded data from these tasks were then analyzed using several well-known machine learning approaches to obtain the fine motor patterns that could discriminate children with ASD from their TD peers. We expect that properly trained classification models based on fine motor patterns could become a practical predictor for future ASD diagnosis.

The paper is organized as follows. Section 2 provides the details of the Haptic-Gripper virtual reality system. Section 3 describes the methods of data acquisition and data analysis. Section 4 presents the study results. Finally, in Sect. 5, we summarize the contributions of the current work along with its limitations and future potential.

2 The Haptic-Gripper Virtual Reality System

The Haptic-Gripper virtual reality system is an interactive system that we developed from a prototype used in a previous project [15] to provide virtual hand manipulation tasks for fine motor skill training of children with ASD. The system can be conveniently set up anywhere a computer is available, and allows users to perform tasks under comfortable and natural conditions that may yield more spontaneous and reliable data. In the previous project, children with ASD showed great interest in such a haptic system and were engaged in the virtual tasks. We therefore used the Haptic-Gripper virtual reality system in this study to provide virtual fine motor tasks, adapting the gripper size and redesigning the virtual tasks for the target participants. The system allows users to manipulate (e.g., move, grip and feel) objects in the virtual tasks using a customized tool, named the Haptic Gripper (see Fig. 1), and simultaneously records quantitative data about the users' manipulative behaviors (e.g., grip force and hand location).

Fig. 1. A child performing Task 8 using the Haptic Gripper in the Haptic-Gripper virtual reality system.

In this study, we used this system to acquire fine motor data from eight virtual fine motor tasks (see Fig. 1). In each task, the participant controlled two balls in a group (moving in parallel along the paths), and tried to make them go through the paths without hitting the walls and to reach the two sets of targets at both ends of the paths. At the beginning of the task, the grouped balls appeared at one end of the paths. Participants were asked to first touch the targets near the other end of the paths, and then go back through the paths to touch the targets near the start point. These tasks provided haptic feedback to improve the sense of immersion. For instance, if the balls collided with the walls, the participant would feel resistance and friction from the walls.

All eight tasks required movement manipulation, which controlled the location of the grouped balls, and grip manipulation, which adjusted the inner distance of the grouped balls to match the spacing of the paths. The Haptic Gripper was the tool the participants used to manipulate the grouped balls in the tasks. It was implemented by augmenting a Geomagic Touch Haptic Device [16] with a 3D-printed gripper containing force sensing resistors (FSRs) from Interlink Electronics [17]. It was thus able to detect the gripper location and map it to the location of the controlled balls, as well as to detect the grip force and use the force data to adjust the inner distance of the grouped balls according to the following pre-defined logic:

$$ Balls\_Distance = \begin{cases} small\_Distance, & f \in [0\,\mathrm{N},\, 2.96\,\mathrm{N}) \\ medium\_Distance, & f \in [2.96\,\mathrm{N},\, 5.5\,\mathrm{N}) \\ large\_Distance, & f \in [5.5\,\mathrm{N},\, \infty) \end{cases} $$
(1)

All tasks required participants to grip with a force in the medium range, i.e., [2.96 N, 5.5 N), in order to pass the parallel paths, which had a fixed medium spacing. This force range was determined based on the grip force used by Grade 4 children in handwriting [18].
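For illustration, a minimal sketch of this force-to-distance mapping is given below (not the system's actual code); the thresholds follow Eq. (1), while the distance values, function name and variable names are hypothetical.

```python
# Illustrative mapping from grip force (in Newtons) to the inner distance
# of the grouped balls, following the thresholds of Eq. (1).
SMALL, MEDIUM, LARGE = 0.5, 1.0, 1.5  # hypothetical distances in task units


def balls_distance(force_newton: float) -> float:
    """Return the inner distance of the grouped balls for a given grip force."""
    if force_newton < 2.96:
        return SMALL
    elif force_newton < 5.5:
        return MEDIUM
    return LARGE


# A grip force of 4.2 N falls in the medium range [2.96 N, 5.5 N),
# the range required to pass the parallel paths.
assert balls_distance(4.2) == MEDIUM
```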

Figure 1 illustrates the use of the Haptic-Gripper virtual reality system and the task types used in this study. A child grabbed the gripper and placed her thumb and index finger on the press plates. She then moved the gripper and applied appropriate grip force to move the grouped balls through the white paths to touch the targets and earn rewards. Eight tasks with paths of different shapes were prepared in the task library and were loaded automatically after the previous task was completed.

3 Method

The purpose of this study was to investigate whether virtual tasks involving fine motor activities could contribute to distinguishing children with ASD from their TD peers using machine learning approaches. The study was approved by the Institutional Review Board of Vanderbilt University.

3.1 Participants

We recruited six participants with ASD and six TD participants through a research registry of the autism center of Vanderbilt University. The participants with ASD had diagnoses confirmed by a clinician based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) criteria [19]. The Stanford-Binet Intelligence Scales, Fifth Edition (SB-5) [20] was employed to measure the intellectual functioning (IQ) of the participants, while the Social Responsiveness Scale, Second Edition (SRS-2) [21] was completed by the participants' parents to index their children's ASD symptoms. The participants' characteristics are shown in Table 1, which indicates that the ASD and TD groups were similar in IQ and age.

Table 1. Participants’ characteristics.

3.2 Procedure

The experimenters first introduced the Haptic-Gripper virtual reality system to the participants and explained how to perform the tasks using the Haptic Gripper. Because none of the participants had used the haptic device before, they were allowed to complete practice tasks with instructions from the experimenters to familiarize themselves with the system. Next, the participants performed the tasks on their own, from Task 1 to Task 8 in that order, and were allowed to rest between tasks.

3.3 Feature Extraction

The Haptic-Gripper virtual reality system collected four types of performance metrics at 50 Hz. These metrics included:

  • Task Duration: the time a participant took to complete one task.

  • Hit Number: the number of times the grouped balls hit the walls.

  • Grip Force: the series of force values a participant applied during one task.

  • Location: the series of location data of the grouped balls during one task.

From the Location data, we derived the Speed metric, a series of movement speed values of the grouped balls in one task. We also generated a "Root-Mean-Square Error (RMSE)" metric to indicate motion stability, which measured the distance between the location of the ball and the center of the path. In addition, by dividing each task into two sub-processes, (1) going forward through the paths (GO process) and (2) going backward through the paths (BACK process), we derived sub-metrics for each sub-process. For instance, "D_GO" represented the duration of the "go forward" sub-process, while "D_BACK" represented that of the "go backward" sub-process.
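To make the derivation concrete, the sketch below shows one way the Speed and RMSE metrics could be computed from the 50 Hz location stream; the array layout and the representation of the path center are our assumptions, not the system's actual implementation.

```python
import numpy as np

SAMPLE_RATE_HZ = 50.0  # sampling rate of the Haptic-Gripper system


def derive_speed(locations: np.ndarray) -> np.ndarray:
    """Movement speed of the grouped balls from successive (x, y, z) samples."""
    dt = 1.0 / SAMPLE_RATE_HZ
    step_lengths = np.linalg.norm(np.diff(locations, axis=0), axis=1)
    return step_lengths / dt


def derive_rmse(locations: np.ndarray, path_centers: np.ndarray) -> float:
    """Root-mean-square distance between the ball and the center of the path."""
    errors = np.linalg.norm(locations - path_centers, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```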

In this study, we only considered time-domain features, which are frequently employed in the activity recognition literature [22]. For each type of metric, we extracted several features, and combined some features when appropriate; a sketch of this computation follows below. Table 2 lists the final chosen performance metrics and the corresponding features extracted from them. For Task Duration, Hit Number and RMSE, in addition to the original data, we extracted a "difference" feature, defined as the BACK-process value minus the GO-process value. For Grip Force and Speed, we extracted more features, such as the mean, median, and standard deviation.
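As a rough sketch (assuming each metric is available as a NumPy array per task or sub-process; function and key names are ours), the per-metric time-domain features and the "difference" feature could be computed as follows.

```python
import numpy as np
from scipy import stats


def time_domain_features(samples: np.ndarray) -> dict:
    """Typical time-domain features for a Grip Force or Speed series."""
    mean = np.mean(samples)
    return {
        "mean": mean,
        "median": np.median(samples),
        "std": np.std(samples),
        "meanmad": np.mean(np.abs(samples - mean)),  # mean absolute deviation
        "cov": np.std(samples) / mean,               # coefficient of variation
        "kurtosis": stats.kurtosis(samples),
    }


def difference_feature(back_value: float, go_value: float) -> float:
    """'Difference' feature: BACK-process value minus GO-process value."""
    return back_value - go_value
```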

Table 2. Performance metrics and features.

3.4 Feature Selection and Classification

The total number of extracted features was 59. We first ranked all features by their F-values from a one-way analysis of variance (ANOVA) test [23], and generated a feature list arranged in descending order of F-value. Since redundant features do little to improve the accuracy of the classification model, we reduced each set of highly correlated features (correlation > 0.9) to the single feature with the highest F-value, which left 35 features. To select the most discriminative features, we trained and evaluated models on subsets of the feature list, with the subset size increasing from 1 to 35. Each subset was constructed by iteratively adding the next feature from the top of the feature list.
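A minimal sketch of this ranking-and-filtering step is shown below, assuming the 59 features sit in a pandas DataFrame `X` with group labels `y`; the helper name and the use of scikit-learn are our choices, not necessarily those of the original analysis.

```python
import pandas as pd
from sklearn.feature_selection import f_classif


def rank_and_filter(X: pd.DataFrame, y, corr_threshold: float = 0.9) -> list:
    """Rank features by ANOVA F-value, then drop highly correlated ones."""
    f_values, _ = f_classif(X.values, y)
    ranked = [name for _, name in sorted(zip(f_values, X.columns), reverse=True)]
    selected = []
    for name in ranked:
        # Keep a feature only if it is not highly correlated with one already kept.
        if all(abs(X[name].corr(X[kept])) <= corr_threshold for kept in selected):
            selected.append(name)
    return selected  # descending F-value order, redundancy removed
```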

We trained six classifiers (Table 3) on our dataset (96 samples), and used leave-one-subject-out cross-validation for model evaluation. Classification accuracy was evaluated using the F1 score, which considers both the precision (P) and the recall (R) of the classification results. The F1 score for each model was computed as:

$$ F_{1} = 2 \times \frac{P \times R}{P + R}, \qquad P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, $$
(2)

where TP was the number of ASD samples correctly classified as members of the ASD group, FP was the number of TD samples incorrectly classified as members of the ASD group, and FN was the number of ASD samples incorrectly classified as members of the TD group.
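The evaluation loop can be sketched as follows, pairing leave-one-subject-out cross-validation with the F1 score of Eq. (2); k-NN stands in for any of the six classifiers in Table 3, and the array names and labels are assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score


def loso_f1(X: np.ndarray, y: np.ndarray, subject_ids: np.ndarray) -> float:
    """Leave-one-subject-out evaluation; ASD is the positive class (label 1)."""
    clf = KNeighborsClassifier(n_neighbors=3)  # any classifier from Table 3
    y_pred = cross_val_predict(clf, X, y, groups=subject_ids, cv=LeaveOneGroupOut())
    return f1_score(y, y_pred, pos_label=1)
```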

Table 3. Classification methods.

4 Results

Figure 2 shows the classification results of the six classifiers with respect to the number of features. Table 4 lists the maximum score for each classifier. The results indicate that all classifiers could achieve maximum accuracies of 67–80% when an appropriate number of features was used. The k-NN and ANN classifiers performed best, with a maximum accuracy of 80%, when the k-NN classifier used the top feature (mean Grip Force during the BACK process) and the ANN classifier used the top six features. The Naïve Bayes and Random Forest classifiers had the second-best performance, with a maximum accuracy of 75%, when they used the top 5 and top 11 features, respectively. The SVM and Decision Tree classifiers had lower accuracies of 71% and 67%, respectively.

Fig. 2. Classification results for six classifiers with respect to the number of features.

Table 4. Classification results evaluated by F1 score.

According to the results of the ANOVA test, the top 10 features (all with p < .05) are related to Grip Force (six features) and Speed (four features). This suggests that Grip Force and Speed data provided the most information for improving the classification of participants with ASD. The boxplots of the top 10 features (see Fig. 3) indicate that the ASD group applied significantly smaller grip force than the TD group (BACK_Fmean, Fmedian, GO_Fmedian). During the BACK process, the TD group increased their grip force much more than the ASD group (DIFF_Fmean, DIFF_Fmedian), and reduced the variability of their grip force much more than the ASD group (DIFF_Fmeanmad). In addition, during the GO process, the ASD group moved with greater speed than the TD group (GO_Vmean) and had a lower kurtosis of speed (GO_Vkurtosis). During the BACK process, the TD group reduced the coefficient of variation (COV) of speed much more than the ASD group (DIFF_Vcov). Over the whole process, the ASD group showed greater variability of speed than the TD group (Vmeanmad).

Fig. 3. Boxplots of the top ten features ranked by F-values from the ANOVA test.

5 Conclusion

In this paper, we presented a preliminary study evaluating the effectiveness of using fine motor information for ASD identification, and achieved up to 80% classification accuracy using machine learning approaches. The results revealed that the discriminative fine motor patterns of children with ASD were related to grip force and movement speed, supporting the notion that motor differences can be a predictor for ASD diagnosis. It is worth noting that the Grip Force data, which have not been sufficiently analyzed in existing studies, were shown to contain the most useful information for improving classification accuracy. The study also offered insights into the potential of computer-aided systems for ASD diagnosis and intervention, which offer benefits in recording objective measures and providing an engaging, low-cost and non-intrusive environment.

While the results of our study are encouraging, the study is limited by its small sample size and restricted set of feature types. Follow-up work will include recruiting a larger population with younger participants, developing more virtual fine motor tasks with different levels of difficulty, and integrating more features to improve classification performance. This area of research has not yet been explored in depth, and it may be possible to combine tasks and features from several studies to develop a more comprehensive classification test in the future.