1 Introduction

1.1 Background

Anatomy education is fundamental to life science and health education as well as the visual arts. Students may use many study aids, including diagrams, illustrations, animations, and 3D graphics (Albanese 2010), and current learning tools for anatomy can be enhanced by technological innovations such as virtual reality (VR) and augmented reality (AR). Researchers have developed both technologies for anatomy education; Seo et al. created two such systems, ARnatomy and Anatomy Builder VR. ARnatomy integrates a tangible user interface with augmented reality, using physical dog bones to control the display of information on a mobile device such as a smartphone or tablet (Seo et al. 2014). Anatomy Builder VR examines how a virtual reality system can support embodied learning in anatomy education; the project pursues an alternative, constructivist pedagogical model for learning canine anatomy. Direct manipulation in the program allows learners to interact with individual bones or groups of bones, to choose their viewing orientation, and to control the pace of the content.

This project is one branch of Anatomy Builder VR: a virtual reality application of a musculoskeletal canine thoracic limb model that supports students’ understanding of form, function, and movement. We formulated three hypotheses, informed by our initial literature review, to address the three major factors of the experiment: spatial visualization, quality of muscle representation, and interactivity.

1.2 Spatial Visualization

Spatial visualization is the ability to mentally manipulate an object in three dimensions. Prior research suggests that students with low spatial visualization ability can achieve memory retention similar to that of high spatial visualization students when taught with dynamic visualization methods; this is called the compensating hypothesis (Berney et al. 2015). A recent study of problem-solving strategies found spatial visualization to be the strongest predictor of visuospatial anatomy comprehension, which suggests that visualizing the movement of the canine thoracic limb in VR can enhance memory retention (Nguyen et al. 2016). Spatial awareness has been observed to boost anatomy learning, and with virtual reality as a physically immersive medium, learners can take full advantage of spatial visualization (Lu et al. 2017). In active learning, mental manipulation and interaction promote higher cognitive engagement, creating a more effective learning experience. Our application therefore provides a 3D space in which users can walk around the model and manipulate its components, supporting spatial visualization.

1.3 Muscle Representation

We provide two muscle display modes: a realistic mode and a symbolic mode. Both representations offer the same movement and function, with the symbolic setup displaying a simplified visual form; however, we expected the realistic form to be the deciding factor for memory retention. The virtual canine thoracic limb is an example of artificial implementation: we rely on recreating a real form to convey a better understanding. Making this spatial connection while immersed in the VR environment prompts the brain to evaluate the scene perceptually, directly promoting spatial learning. There is also a direct correlation between the sense of ownership, the sense that one’s own body is the source of sensations, and the representation of the virtual model: ownership increases as the model more closely resembles its actual form (Argelaguet et al. 2016).

1.4 Interactive System

Based on our own expectations, combined with a study on interactivity and conceptual learning in virtual reality, we anticipated that the interactive VR experience would show better results. Interactivity provides a more immersive experience, offering a stronger connection to the material and promoting deeper learning. That study found that interactive VR aids children in problem solving, although the non-interactive version seemed to support greater indications of conceptual change (Roussou et al. 2006). When completing a cognitive task, users benefit from a dynamic learning environment in which they can manipulate objects themselves, drawing on their experience and prior knowledge. Direct manipulation, immersion, and interaction are among the most important aspects of learning 3D anatomical information, as they give students a clear visual and physical understanding of form and spatial relationships.

2 Canine VR Muscle Simulation Room

2.1 Setup and Creation

The final application includes one set of hyper-realistic dog bones from the thoracic limb, two different representations of the biceps/triceps, a functional model stand that can include interactive buttons, and a lab setting to help immerse the participants. Each bone went through a process of laser scanning, 3D sculpting, retopology, and texturing. The realistic muscle models went through a similar process, but one-on-one sculpting sessions with our anatomy experts replaced laser scanning. The symbolic muscles were created with the muscle effects in Autodesk Maya, and the same program was used to create the model stand and lab environment. Both thoracic limbs were rigged and animated so they could complete a walk cycle and show the corresponding muscle contractions. Finally, programming in Unity was required to set up the VR equipment and interactive actions for the application.

Photogrammetry reconstructs a 3D model of an object from overlapping photographs taken at different positions. Our experiments with photogrammetry produced poor-quality scans that we were unable to use; the process did not achieve the level of anatomical accuracy we desired. Laser scanning proved the most effective way to create the initial bone models, and we used the “XYZ Scan Handy” laser scanner. The scanner has a built-in sensor, and once the object is recognized and in focus it produces an OBJ file (a 3D model file). Even after the scan is complete and the model is created, however, a touch-up process is still required.

We used the 3D sculpting software “Sculptris” (Fig. 1) to control and fix problems with the topology of each bone. Topology refers to the 3D mesh of vertices, edges, and faces that shapes the object. We went through each of the five bones of the thoracic limb (the scapula, radius, ulna, humerus, and carpal bones) and verified that the topology was free of errors that would cause texturing complications. The scapula and carpal bones first had to be brought into another application, Autodesk Maya, to repair holes in the mesh left by the laser scans.

Fig. 1. Sculpting process

Texturing was the final step in producing finished muscle assets that could be rigged and animated. We strove to create a texture that would look like real muscle. Using Substance Painter, we built up multiple layers to shape the texture of the models, producing a two-toned, red/orange surface with striations similar to real muscle (Fig. 2).

Fig. 2. Muscle texturing

2.2 Creating Four Unique Conditions for Experimentation

Four unique versions of the application were created from the assets described above: realistic interactive (Fig. 3), realistic non-interactive, symbolic interactive, and symbolic non-interactive (Fig. 4). The interactive versions of the application had buttons that enabled the user to control the thoracic limb.

Fig. 3. Realistic interactive

Fig. 4. Symbolic non-interactive

We programmed the interactive application to control the animation speed of the walk cycle and the rotation of the thoracic limb and its base. The rotation controls allow the user to rotate the thoracic limb and learn four anatomical views (lateral, medial, cranial, and caudal); when the model rotates, part of the base rotates at the same rate to display the corresponding view. Playing and pausing the animation teaches the user about the reciprocal relationship between the biceps and triceps (Fig. 5).
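The project itself was implemented as Unity scripts, so the following Python snippet is only an illustrative sketch of the rotation logic (all names are ours, not the project’s code). It assumes the limb and its base share a single yaw angle and that the four labeled views sit at 90-degree increments around the model:

```python
# Illustrative model of the view-rotation logic (not the project's Unity code).
# The limb model and its base share one yaw angle, so they always stay in sync,
# and the nearest of the four anatomical views is displayed as a label.
VIEWS = ["lateral", "cranial", "medial", "caudal"]  # assumed 90-degree spacing

def rotate(yaw_degrees, step):
    """Advance both the model and the base by the same step, wrapping at 360."""
    return (yaw_degrees + step) % 360

def current_view(yaw_degrees):
    """Snap the shared yaw to the nearest of the four labeled views."""
    index = round(yaw_degrees / 90) % 4
    return VIEWS[index]
```

Keeping a single yaw value for both objects is what guarantees the base label always matches the limb’s orientation, mirroring the paper’s description of the base rotating “at the same rate.”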

Fig. 5. Anatomical views

By pointing the VIVE hand controller’s laser at a specific button, the user can press the trigger on the back of the controller to activate that button’s function. The non-interactive versions of the application contained no buttons, and the user did not receive a controller. Muscle representation is the only difference between the realistic and symbolic versions of the application: the symbolic version emphasizes muscle contractions more clearly, while the realistic version shows anatomically realistic contractions.
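The trigger-to-button interaction can be sketched as a simple dispatch table: when the trigger fires, whatever control the laser is hovering over runs its action. The sketch below is a hypothetical Python model (the real implementation is a Unity script, and the button names and speed limits here are our own assumptions), using the five controls described in Sect. 4.2:

```python
# Hypothetical model of the button dispatch (names and limits are assumptions).
def make_panel():
    """Create the panel state and the five button actions that mutate it."""
    state = {"speed": 1.0, "yaw": 0.0, "paused": False}

    def faster():       state["speed"] = min(state["speed"] + 0.5, 2.0)
    def slower():       state["speed"] = max(state["speed"] - 0.5, 0.0)
    def rotate_cw():    state["yaw"] = (state["yaw"] + 90) % 360
    def rotate_ccw():   state["yaw"] = (state["yaw"] - 90) % 360
    def toggle_pause(): state["paused"] = not state["paused"]

    buttons = {"faster": faster, "slower": slower, "cw": rotate_cw,
               "ccw": rotate_ccw, "pause": toggle_pause}
    return state, buttons

def on_trigger(hovered, buttons):
    """Fire the hovered button's action; do nothing if the laser hits no button."""
    if hovered in buttons:
        buttons[hovered]()
```

Ignoring trigger presses when no button is hovered matches the described behavior, where the trigger only activates the function of the button the laser is pointing at.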

3 User Studies

3.1 Participants

The user studies were conducted to give us a better understanding of how effective the different conditions would be for retention of musculoskeletal movement and anatomical identification. Our application uses dynamic visualizations, representations of material that involve rotational movement and analysis, to engage spatial visualization; to learn the information in the teaching module, students must mentally visualize the canine thoracic limb. We recruited 24 participants who had never taken a university-level anatomy course and randomly assigned each to one of the four versions, balanced by their Vz scores. Spatial visualization, or Vz, is the ability to apprehend, encode, and manipulate mental representations. Before the experiment, participants’ spatial visualization abilities were assessed with the Revised Purdue Spatial Visualization Test (Revised PSVT:R). Afterward, students’ comprehension of anatomical information was assessed with a post-test covering anatomical views, joint locations, and muscle contractions.
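The text does not spell out the assignment procedure in detail; one plausible reading of assigning participants “based on their Vz scores” is a stratified randomization that splits the pool into low- and high-Vz halves and spreads each half evenly across the four conditions. The sketch below illustrates that idea (function names and the exact procedure are our assumptions, not the paper’s protocol):

```python
import random

# Illustrative stratified assignment: rank participants by Revised PSVT:R
# score, split into low/high halves, then randomly spread each half evenly
# over the four study conditions. This procedure is an assumption.
CONDITIONS = ["realistic interactive", "realistic non-interactive",
              "symbolic interactive", "symbolic non-interactive"]

def assign(vz_scores, seed=0):
    """vz_scores: dict of participant id -> Vz score. Returns id -> condition."""
    rng = random.Random(seed)
    ranked = sorted(vz_scores, key=vz_scores.get)  # low to high Vz
    half = len(ranked) // 2
    assignment = {}
    for group in (ranked[:half], ranked[half:]):   # low-Vz half, high-Vz half
        rng.shuffle(group)
        for i, pid in enumerate(group):
            assignment[pid] = CONDITIONS[i % len(CONDITIONS)]
    return assignment
```

With 24 participants this yields six per condition, three from each Vz half, which is consistent with the balanced comparison of high- and low-Vz users described in the results.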

3.2 Study Procedure

In this study, we collected both quantitative and qualitative data, along with recordings of the user experience, to analyze the experiment fully. The quantitative analysis centered on comparing post-test scores covering anatomical views, joint locations, and muscle contractions; we also used participants’ Revised PSVT:R scores to classify them as having high or low Vz ability, and then compared the effectiveness of each version of the application by sorting the results accordingly. The qualitative data came from the end of each session, when we asked participants questions about their experience during the study.


4 Results

4.1 Learning Experience

Overall, we saw notable differences across the VR conditions. The non-interactive scenes had better scores than the interactive scenes, and the realistic versions scored better than the symbolic versions at both levels of interactivity. As Table 1 shows, the symbolic interactive condition did markedly worse than all the other conditions. Although the sample size was small, the results are still informative: the realistic muscle conditions appear more effective at supporting identification, while the interactivity ended up distracting participants from learning.

Table 1. Post-study score graph

We also noticed in a few sessions that some participants preferred walking around the model in virtual reality over rotating it, while others preferred rotation. The interactivity of the application had some influence here: the non-interactive conditions required students to walk around the model in order to review the anatomical views. The non-interactive system also proved less distracting, based both on test scores and on the qualitative information gathered during the post-study interviews; users could focus more on learning the anatomy because nothing else in the application drew their attention away.

4.2 Study Conditions

The level of interactivity defined the remaining VR conditions, each being either interactive or non-interactive. The interactive condition had five buttons that controlled the animation speed of the thoracic limb’s walk cycle and its rotation about the y (vertical) axis; the non-interactive version had no interactive elements. Participants were read slightly different scripts during the session to account for this difference.

The average score for the non-interactive versions was more than double that of the interactive versions. Looking at why the non-interactive versions did so much better on this section of the test, we found that several participants who mixed up their anatomical views in the first four questions of the test also did so on this section, for the same reason. In every section of our anatomy test, non-interactive learning produced the best memory retention.

5 Discussion

The realistic non-interactive scene scored best, while the realistic interactive version scored slightly lower because participants struggled to understand the biceps/triceps contractions as effectively. The most striking result is the symbolic interactive version, which had the lowest score: aside from one outlier in the symbolic non-interactive scene, all of the worst test scores came from this condition. Analyzing each test individually, we see that participants primarily chose opposite anatomical views, but determining muscle contraction also caused problems, and sometimes both were confused. Based on previous research and our hypotheses, the overall pattern of results is what we expected, but the mechanisms behind it remain open to question; a larger-scale study would help establish more accurate numbers and confirm the findings reported here. Additionally, we noticed in a few sessions that some participants preferred walking around the model in VR rather than rotating it, while others preferred rotation. The interactivity of the application had some influence here, because the non-interactive conditions required students to walk around the model to review anatomical views and, if they were so inclined, to see muscle contractions from different angles.

5.1 Conclusion and Future Plan

This experiment used virtual reality technology to assess different methods of teaching canine anatomy with dynamic visualizations. Spatial visualization, muscle representation, and level of interactivity were the independent variables in our user studies, tested to determine which conditions would best promote memory retention. Across 24 user studies, we observed that low spatial visualization users gained an advantage through dynamic visualization learning, performing almost as well as their high spatial visualization counterparts. Realistic muscles helped participants identify anatomical views more efficiently and therefore produced a significantly better average than the symbolic representation; despite the symbolic representation’s simpler contractions, first-time anatomy learners still performed better in the realistic version. The non-interactive system proved less distracting, as reflected in both the test scores and the post-study interviews, because nothing else in the application competed for the users’ attention. Given the small sample size, additional user studies should be conducted for more reliable results, though we would expect them to show similar findings.