
1 Introduction

It was recently reported that there are 34.8 million individuals with severe disabilities in the U.S. [1]. A severe disability can be described as a disability that limits one or more functional capabilities of an individual, such as mobility, self-care, or employment [2]. Individuals with severe disabilities face difficulties in employment for various reasons, such as difficulty in performing some tasks due to limited abilities, or predetermined or subconscious biases of employers about the challenges these individuals may face, which can result in employers avoiding them altogether [3]. Intuitively, the most effective job training is gained at the physical job site. However, for the reasons mentioned above, many employers hesitate to offer job training to individuals with severe disabilities at such sites unless the trainees already have sufficient prior training to handle the equipment there. This may create dangerous situations for both the trainee and the workers, as injuries due to misuse of mechanical equipment may occur. With this motivation, we believe that virtual reality (VR) can be a strong alternative to job site training. Virtual reality offers many advantages over direct job site training that both address the characteristics of specific disability groups and eliminate the possible hazardous outcomes of physical job site training for them. These advantages can be summarized as follows: safe training in a controlled environment, gradual increase in level complexity, customizable virtual scenarios, real-time feedback, prompts and distractions, repetitive automated training, no time constraint on training, automated data collection, focus on the performed task through isolation from the surroundings, no severe consequences for mistakes, system scalability, automated assessment and reporting, reduced transportation costs, and overall low-cost training as virtual reality systems have become more affordable in recent years [4, 5].

With these in mind, we proposed an advanced immersive virtual reality system, ‘VR4VR’, that aims to train and assess individuals with severe disabilities on vocational skills. The VR4VR project was funded by the Florida Department of Education. Immersive virtual reality can be described as a system that makes users feel as if they have stepped into the virtual world. This can be achieved in several ways: incorporating the motion of the user into the interaction via real-time motion tracking, providing a head mounted display that renders the virtual environment based on the user’s head movements, and using seamless projections on large displays. VR4VR caters to three disability groups, autism spectrum disorder (ASD), traumatic brain injury (TBI), and severe mobility impairment such as spinal cord injury (SCI), and is composed of two main components: a cognitive disabilities training system and a physical disabilities training system. This paper focuses on the physical disabilities training system, which caters to the severe mobility impairment population. The physical disabilities modules offer training on operating an assistive physical robot to move and manipulate boxes or smaller objects (camera control, base motion, single arm, dual arm, and gripper operations). The main components of the VR4VR system’s physical disabilities modules are: the Baxter physical robot, the Razer Hydra user controller, virtual replicas of the Baxter robot and the Razer Hydra controller, a wireless remote-control panel, and a large screen. To train individuals with severe physical disabilities on using an assistive robot, several modules were implemented. These modules included tasks that aim to teach users how to use the Baxter/PowerBot assistive robot to manipulate and move objects. Users were first trained in the virtual reality system and then performed the same tasks using the physical robot.

This paper presents and discusses the user study results of 15 individuals (10 neurotypical, 5 with severe physical disabilities) who used the VR4VR system’s physical disabilities modules. Challenges faced during the design, development, and user study phases, as well as the implications of the user study results, are also discussed; we believe these will benefit future virtual reality studies for vocational rehabilitation. Finally, future research directions are presented.

2 Related Work

Using virtual reality for vocational training of individuals with disabilities has become an emerging area in recent years, due to the advantages it offers and the prevalence of low-cost, new-generation virtual reality systems. In this section, we present the key previous works in the area of using virtual reality for vocational training of adults with disabilities.

Smith et al. assessed the feasibility and efficacy of a virtual reality job interview training system for individuals with Autism Spectrum Disorder (ASD) [6]. The system included job interview simulations with a virtual character. Users who were trained with the virtual reality system showed greater improvement than the traditionally trained users. In addition, users found the virtual reality system enjoyable and easy to use. The authors concluded that the results provided evidence of the system’s feasibility and efficacy. A follow-up study revealed that the participants who trained with the system had increased chances of receiving job offers in the following six months [7]. Wade et al. developed a virtual reality driving training system for individuals with ASD [8]. The system utilized the gaze information of users to adaptively change the virtual environment. User study results indicated that the system was beneficial in training users on driving skills. Tsang and Man proposed a virtual reality vocational training system for individuals with schizophrenia and measured its effectiveness [9]. Results indicated that the performance of individuals who were trained with the virtual reality system improved more than that of individuals trained with conventional methods. Virtual reality training was also more effective in improving the individuals’ self-efficacy. The authors concluded that virtual reality was an effective tool in the vocational training of individuals with schizophrenia. Yu et al. proposed a virtual reality system for training hearing impaired individuals on CNC machine operation skills [10]. The usability test results were promising in terms of effective training. The system is currently under iteration and is planned to be evaluated with a user study in the future.

When our VR4VR system is compared to previous works in the emerging area of vocational rehabilitation using virtual reality, the following main differences can be listed: (1) utilizing and seamlessly integrating several immersive components such as motion tracking, a head mounted display, a curtain display, tangible object interaction, a haptic device, and an assistive robot; (2) offering training in a wide range of vocational skills; and (3) catering to three main disability groups.

3 The VR4VR System

Our VR4VR system is composed of several components. In this section, we present the design and implementation of the main components of the system and parts that are specific to the physical disabilities training system.

3.1 Hardware

Several hardware components were used in the VR4VR system. In the physical disabilities training system, a large 50ʺ TV was used as the display. A Baxter robot [16] mounted on a PowerBot mobile platform [17] served as the assistive robot. Razer Hydra controllers [18] were used for controlling the physical robot and its virtual replica. The software was custom developed with the Unity game engine [19] and the C# programming language.

3.2 Physical Disabilities Training System

To train individuals with severe physical disabilities on using an assistive robot, several modules were implemented. These modules included tasks that aim to teach users how to use the Baxter/PowerBot assistive robot to manipulate and move objects. The users were first trained in the virtual reality system and then performed the tasks using the physical robot. The modules were designed as follows: (1) a camera control module that teaches the user how to switch between the cameras and how to perform basic functions such as zooming and panning; (2) a base control module that teaches the user how to move the robot’s PowerBot base platform around inside the virtual environment; (3) a module that combines the controls taught in the previous two modules in order to navigate around a cluttered environment; (4) an arm control module that teaches the user how to use the two hand-tracked motion controllers to control the robot’s hands and arms; (5) a module that combines all of the skills learned in the previous modules into one cohesive test of matching both the base platform and hand configuration of a semitransparent robot silhouette in the virtual environment; (6) a gripper operations module that teaches the user how to use the motion controllers to operate the robot’s dual parallel grippers; (7) a dual arm control module that teaches the user how to move both of the robot’s arms simultaneously using only one motion controller; and (8) a test module for the dual arm control system that asks the user to retrieve an item from within a warehouse environment and bring it to the delivery area located at the front of the warehouse. The physical disabilities training system is presented in Fig. 1.

Fig. 1. Physical skills training modules of the VR4VR system. Left: the physical robot. Right: the virtual replica of the physical robot in a virtual warehouse environment.
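For illustration, the module progression described above could be represented in the Unity application roughly as follows. This is only a sketch; the enum and class names are assumptions and do not reflect the actual VR4VR code.

```csharp
using System;

// Illustrative sketch of the training module progression; the order mirrors
// the eight modules described above (names are assumptions).
public enum TrainingModule
{
    CameraControl,          // (1) switching cameras, zooming, panning
    BaseControl,            // (2) driving the PowerBot base platform
    CombinedNavigation,     // (3) cameras + base in a cluttered environment
    ArmControl,             // (4) single-arm control with the motion controllers
    PoseMatchingTest,       // (5) matching a semitransparent robot silhouette
    GripperOperations,      // (6) operating the dual parallel grippers
    DualArmControl,         // (7) moving both arms with one motion controller
    WarehouseRetrievalTest  // (8) retrieving an item and delivering it
}

public static class TrainingCurriculum
{
    // Modules are presented in this fixed order, each building on the previous one.
    public static readonly TrainingModule[] Sequence =
        (TrainingModule[])Enum.GetValues(typeof(TrainingModule));
}
```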

Completing the components of a task would raise the user’s score, up to a maximum of 100 points. Failing to complete a component within the time constraints, dropping an object, or positioning it in an incorrect location would result in that component being skipped and those points being lost. Colliding with objects in the robot’s workspace, including other objects, tables, and shelves, would result in a deduction of 5 points. Colliding with environmental objects, such as walls, doors, and other static objects, would result in a deduction of 10 points. The following equation shows how scores were calculated, where γ denotes the progress of the user (out of 100), α denotes the number of scene object collisions, and β denotes the number of environment object collisions: Score = γ − (30α + 20β).
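As a minimal sketch of this scoring scheme in C# (the language the system was developed in), the snippet below applies the penalty weights from the equation above; the class and method names, and the clamping at zero, are assumptions made for illustration.

```csharp
using System;

// Minimal scoring sketch (names are assumptions, not the actual VR4VR code).
public static class TaskScoring
{
    private const int SceneCollisionPenalty = 30;        // weight applied to α (scene object collisions)
    private const int EnvironmentCollisionPenalty = 20;  // weight applied to β (environment object collisions)

    // gamma: task progress out of 100; alpha, beta: collision counts.
    public static int ComputeScore(int gamma, int alpha, int beta)
    {
        int score = gamma - (alpha * SceneCollisionPenalty + beta * EnvironmentCollisionPenalty);
        return Math.Max(0, score); // clamping at zero is an assumption, not stated in the paper
    }
}
```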

The controls for the physical disabilities modules were the same when using the physical robot and the virtual robot. A dedicated control program retrieved the current state information from either the virtual robot running inside the VR4VR simulation or the physical robot. The input from the user’s controller was sent to the control program and was used in conjunction with the state information to generate the motion of the robot’s base and arms. The control program would then execute these motions on either the virtual robot or the physical robot. Finally, the user was presented with either the live camera feed from the physical robot’s cameras or the virtual camera feed from the virtual robot’s cameras within the simulation. The virtual robot was modeled after the physical robot and emulated the physical robot’s specifications. An overview of the control system can be seen in Fig. 2.
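The following C# sketch illustrates this routing idea, with the control program working against a common robot interface that could be implemented once by the virtual robot in the simulation and once by the physical robot; all type and member names are assumptions, not the actual VR4VR API.

```csharp
// Hedged sketch of the control routing described above; names are illustrative.
public sealed class RobotState { /* joint angles, base pose, gripper state */ }
public sealed class MotionCommand { /* target base velocity and arm joint targets */ }
public sealed class CameraFrame { /* pixels from a live or simulated camera */ }
public sealed class ControllerInput { /* Razer Hydra positions, orientations, buttons */ }

public static class MotionPlanner
{
    // Combines controller input with the current robot state to produce a motion command.
    public static MotionCommand FromInput(ControllerInput input, RobotState state) => new MotionCommand();
}

// Implemented by both the virtual robot (simulation) and the physical robot.
public interface IRobotBackend
{
    RobotState GetCurrentState();
    void ExecuteMotion(MotionCommand command);
    CameraFrame GetCameraFrame();
}

public sealed class RobotControlProgram
{
    private readonly IRobotBackend backend;

    public RobotControlProgram(IRobotBackend backend) => this.backend = backend;

    // Called once per frame with the latest controller input; returns the camera
    // feed (live or virtual) that is shown to the user on the display.
    public CameraFrame Step(ControllerInput input)
    {
        RobotState state = backend.GetCurrentState();
        MotionCommand command = MotionPlanner.FromInput(input, state);
        backend.ExecuteMotion(command);
        return backend.GetCameraFrame();
    }
}
```

In such an arrangement, switching between virtual reality training and physical robot operation only requires swapping the backend instance, which is consistent with the controls being identical in both cases.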

Fig. 2. Overview of the physical disabilities training system showing the integration of the controls with the virtual robot and the physical robot.

4 User Study

A user study was performed for the physical disabilities training system with a total of 15 participants (10 neurotypical, 5 with severe physical disabilities). All participants were older than 18, with a mean age of 26.88. None of the participants had prior virtual reality experience. The participants with disabilities were clients of the Florida Vocational Rehabilitation (VR) program and were job seekers. Professional job trainers accompanied the participants with disabilities during the testing sessions. Testing took two and a half hours in total per participant: one hour of virtual reality training module testing, a 15-minute break with survey filling, and one hour of physical robot testing, followed by survey filling. The user study was performed under IRB protocol #Pro00013008.

5 Results

Level scores for the physical disabilities training modules are presented in Fig. 3 for the virtual reality training and in Fig. 4 for the physical robot use, for both neurotypical individuals and individuals with physical disabilities. Level scores were out of 100, with possible deductions for collisions with the environmental props and walls and for dropping items onto the floor.

Fig. 3. Level scores for the physical disabilities training modules: virtual reality training.

Fig. 4. Level scores for the physical disabilities training modules: physical robot use.

Ease of interaction scores for the physical disabilities training modules are presented in Fig. 5. These scores were out of 5, with answers given on a 5-point Likert scale. For the physical disabilities training module, when asked the question “Would you come back to train with us again?”, 14 of the participants answered “Yes” and 1 participant answered “No”.

Fig. 5. Average ease of use scores for the physical disabilities training modules.

When the job trainers were interviewed about the physical disabilities training modules, they stated that the virtual reality training would be beneficial for the job seekers. However, they indicated that the training would be challenging for individuals with limited gripping abilities because of the joystick controllers used in the module. They also emphasized that the training time in the virtual reality module was not sufficient to prepare the users for using the physical robot, and suggested repeating the virtual reality training module at least three times before letting the users operate the physical robot.

6 Discussion

The results for the physical disabilities training modules were lower than expected, especially for the arm control and gripper modules. Ease of use scores were also low for these two components. The participants gave lower ease of use scores to the virtual reality module than to the physical robot. When we investigated the possible reasons behind this by interviewing the participants and the job trainers, they stated that it was easier to operate the robot when they saw it in the real world, rather than seeing the virtual robot on a monitor through cameras. The users also found the additional camera manipulation in the virtual reality training difficult to operate. In addition, the users stated that judging depth was more challenging in the virtual reality training than when seeing the physical robot in the real world. To remedy these issues, the camera controls will be automated and more depth cues will be added.

The participants stated that the sensitivity of the controls, the large number of controls, and comprehending the orientation of the robot’s components took time to get used to. The most common problem was in gripping objects, which required the users to orient the arm accurately and grasp small objects such as a water bottle. The users had difficulty in fine tuning the motion, and some users had trouble remembering the controls.

Another reason behind the low scores of the participants with physical disabilities was the secondary cognitive disabilities, although not severe, that some of the participants had. The instructions of the physical disabilities modules were not prepared to accommodate cognitive disabilities; hence, comprehending the controls in the allocated time was observed to be somewhat overwhelming for these participants.

For both user groups, the comprehension needed to fully operate the robotic platform, both in virtual reality and on the physical robot, was an issue. Since there was no direct mapping between the users’ motion and the corresponding robot motion, users had to adapt their motion through the controller to achieve the desired robot motion. This was observed to be counterintuitive for some participants; therefore, additional training sessions and a more in-depth explanation of the control system and its limitations need to be added in order to facilitate easier comprehension.

Another challenge was maintaining consistency between the virtual test environment and the physical robot test environment. One of the problems encountered was in the object grasping task, in which the users needed to grasp a water bottle. The water bottles in the virtual environment were more difficult to knock over than the water bottles in the real environment, which made this particular task much more difficult to perform on the physical robot.

Even though they were not completely comfortable in controlling the physical robot, most of the participants were pleased to be able to control an advanced physical robot after only an hour of virtual reality training. Most of the participants stated that more training time would have prepared them better for controlling the physical robot.

Overall, although the virtual reality training only partially prepared the users to operate the physical robot, they gained a baseline understanding and were able to perform simple tasks with a physical robot that they had no prior training to operate. We think this makes virtual reality a platform worth further exploration in the vocational training of individuals with physical disabilities.

7 Conclusion

In this paper, we presented the physical disabilities component of the VR4VR system, which aims to train and assess individuals with severe disabilities on vocational tasks. A total of 15 individuals (neurotypical and with physical disabilities) participated in testing the system, accompanied by professional job trainers. The results indicate that virtual reality is a promising tool in the vocational training of individuals with physical disabilities. Future work will include iterating on the system according to the feedback from the participants and the job trainers, and performing a second user study with more participants.