
1 Introduction

Advances in technology have led to increased opportunities for computer-based training (CBT) that can be accessed anytime and anyplace for self-guided learning. Effective computer-based instruction can provide highly flexible training that is less costly, labor-intensive, and time-consuming than classroom training. It can also complement traditional learning environments, increasing the rate of knowledge acquisition along the spectrum from basic to higher-order cognitive skills.

The challenge in the development of CBT is the rapid and cost-effective creation of robust, meaningful training content that can be used across multiple contexts, genuinely advances learning, and promotes long-term retention. The military and other organizations have explored the benefit of reusable learning objects (RLOs), which “chunk” small learning components together so they can be reused repeatedly to promote operational skill development (Beck and Baggio 2007). While emerging as a viable content-reusability concept, RLOs are only effective if they are of high quality.

As such, content developers need support in authoring RLOs that are effective, situation-appropriate, and meet the educational needs of a wide range of trainees. This is critical because training that fails to adequately address the key knowledge, skills, abilities, and other attributes (KSAOs) pertinent to a context will fail to convey what, in some cases, may be a complicated set of instructional elements underlying the core competencies of the domain.

The aim of our research was to study effective and appropriate methods to provide guided instructional design support to subject matter experts (SMEs), instructional designers, and inexperienced training developers for rapidly generating RLOs. An emphasis was placed on defining methods for integrating best practices and proven instructional design principles within the authoring component to aid the end user in building good instructional strategies into RLOs. This research resulted in an easy-to-use authoring tool, implemented on a mobile platform, that guides users in the rapid development of instructional content that promotes learning. The mobile app was developed to advance the Army’s goals for rapidly addressing emerging training needs by helping users organize their knowledge into effective learning objects and by supporting ongoing training research initiatives, such as the Army Research Laboratory’s Adaptive Training Research program, which is exploring rapid, reusable content development for intelligent tutoring systems.

2 Adaptive Training

With the release of the “Army Learning Concept for 2015” (TRADOC 2011), the Army signaled a need to change the way it trains. One goal was to reduce the number of lecture-based classes and move toward “…more engaging technology-delivered instruction that will be used as part of a blended learning approach, distributed to the workforce for job-related sustainment learning, and as performance support applications.” As a result, a repository of learning modules will be needed “…to support career progression, assignment-oriented learning, operational lessons, and performance support aids and applications.” In addition, intelligent tutors will be able to tailor the learning experience to the individual learner.

To support this vision, the Army Research Laboratory began its Adaptive Training research program. The focal point of the program is the Generalized Intelligent Framework for Tutoring (GIFT), an open-source software package for building intelligent tutoring systems. The cloud version of the GIFT authoring tools provides an easy-to-use flow chart for developing tutors and allows incorporation of multimedia objects, quizzes/surveys, etc. (Fig. 1). GIFT also has a pedagogy module that supports adaptation based on learner characteristics and performance (Sottilare et al. 2012).

Fig. 1. The GIFT authoring tool
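As a rough illustration of this kind of rule-based adaptation, the fragment below sketches how content selection might key off learner performance. It is a minimal, hypothetical example in Kotlin; the class names, fields, and thresholds are our own assumptions and do not reflect GIFT's actual API.

```kotlin
// Illustrative sketch only: rule-based activity selection in the spirit of the
// adaptation GIFT's pedagogy module performs on learner state and performance.
// Names, fields, and thresholds are hypothetical, not GIFT's actual API.
data class LearnerState(val lastQuizScore: Double, val attempts: Int)

fun selectNextActivity(state: LearnerState): String = when {
    state.attempts >= 3 && state.lastQuizScore < 0.60 -> "instructor-referral" // repeated low scores
    state.lastQuizScore < 0.60 -> "remediation"       // re-present the RLO with extra support
    state.lastQuizScore < 0.85 -> "guided-practice"   // scaffolded practice on the same objective
    else -> "advance"                                  // move on to the next learning objective
}

fun main() {
    println(selectNextActivity(LearnerState(lastQuizScore = 0.55, attempts = 2)))  // prints: remediation
}
```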

3 Prototype Development

Our research studied effective ways for guiding users in the development of RLOs. The work included the development of a functional authoring system prototype that provides a robust set of capabilities for rapid authoring of instructional content for a mobile computing environment. Specifically, the rapid authoring platform (RAP) is an Android-based mobile platform that:

  • steps users through the development of RLOs, lessons, and short performance assessments associated with RLOs and lessons

  • provides means to integrate multimedia assets such as video, text, audio, and still imagery within RLOs

  • provides guidance that ensures the inclusion of various instructional system design (ISD) elements within RLOs and lessons

  • supports the saving of RLOs in a sharable content format (a minimal, illustrative data model of these building blocks is sketched after this list)
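To make these building blocks concrete, the following is a minimal sketch, assuming a simple object model in Kotlin, of how an RLO and its constituent parts could be represented. The class and field names are illustrative assumptions, not the RAP's actual schema.

```kotlin
// Hypothetical data model for an RLO and its parts; names are illustrative only.
data class LearningObjective(
    val action: String,            // what the trainee should be able to do
    val conditions: List<String>,  // conditions under which the action is performed
    val standard: String           // required level/quality of performance
)

sealed class MediaAsset {
    data class Video(val uri: String) : MediaAsset()
    data class Audio(val uri: String) : MediaAsset()
    data class Image(val uri: String, val caption: String? = null) : MediaAsset()
    data class Text(val body: String) : MediaAsset()
}

data class AssessmentItem(val prompt: String, val choices: List<String>, val answerIndex: Int)

data class Rlo(
    val title: String,
    val objective: LearningObjective,
    val media: List<MediaAsset>,        // sequenced to meet the learning objective
    val assessment: List<AssessmentItem>
)
```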

3.1 User Needs Analysis

This task focused on user research to establish user needs for the proposed rapid authoring platform. The work included reviewing best practices in the development of authoring systems and in the creation of RLOs. The review provided the initial basis for the application's use cases. Formal use cases were created and captured in a use case document; the use cases were internally baselined and a user flow was established. The high-level steps, or user flow, for RLO development are presented in Fig. 2.

Fig. 2. Steps for RLO development supported by the RAP

Based on our user needs analysis, a guided authoring approach was conceptualized to “walk” the user through numbered steps for creating or editing RLOs and/or lessons of grouped RLOs. Instructional content development starts with establishing enabling learning objectives and capturing the relevant actions (tasks), conditions, and standards, which are familiar to military users. The next step involves adding or creating media content for the RLO. Multiple media components may be used within a single RLO; the media content may be a video, an audio file, text, a graphic or image, or some combination of these elements. For easy authoring, the user selects an instructional template; the templates ensure some standardization in the development of multimedia components. As media is added to an RLO, it can be sequenced to meet the learning objectives. Once the media is in place, the system takes the user to the next step, the creation of an assessment for the RLO. The RLO assessment may be one or more questions to test knowledge acquisition. The last two steps guide the user through previewing the RLO and completing the process by saving and/or uploading it.
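The numbered, guided flow described above can be thought of as a small wizard state machine. The sketch below is illustrative only, assuming the step names shown in the user flow of Fig. 2; it is not the RAP's implementation.

```kotlin
// Illustrative sketch of the guided, step-by-step RLO authoring flow (cf. Fig. 2).
// The enum values mirror the described user flow; the wizard logic is hypothetical.
enum class RloAuthoringStep {
    DEFINE_OBJECTIVE,   // capture action, conditions, and standards
    ADD_MEDIA,          // add video, audio, text, and imagery via templates
    SEQUENCE_MEDIA,     // order media to meet the learning objective
    CREATE_ASSESSMENT,  // author one or more knowledge-check questions
    PREVIEW,            // review the assembled RLO
    SAVE_OR_UPLOAD      // package and share the finished RLO
}

class AuthoringWizard {
    var current: RloAuthoringStep = RloAuthoringStep.DEFINE_OBJECTIVE
        private set

    // Advance to the next step; returns false once the final step is reached.
    fun next(): Boolean {
        val steps = RloAuthoringStep.values()
        if (current.ordinal == steps.lastIndex) return false
        current = steps[current.ordinal + 1]
        return true
    }
}

fun main() {
    val wizard = AuthoringWizard()
    do { println(wizard.current) } while (wizard.next())  // prints each step in order
}
```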

In a manner similar to RLO creation, the RAP was conceptualized to guide a user through the authoring of lessons. Lessons are “larger,” more complex instructional objects created by sequencing multiple RLOs together. The RLOs can be sequenced as needed to meet terminal learning objectives. Lessons include the assessments associated with their constituent RLOs and may include additional assessments to evaluate knowledge acquisition for the lesson's learning objectives. The high-level steps, or user flow, for lesson development with the RAP are presented in Fig. 3.

Fig. 3. Steps for lesson development supported by the RAP
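In the same illustrative spirit as the RLO sketch above, the fragment below shows one plausible way to represent a lesson as an ordered sequence of RLOs with its own terminal objective and added assessments. The types are assumptions, not the RAP's data model, and the Rlo class is reduced to keep the example self-contained; the example values follow the M16 A2 lesson described later in the text.

```kotlin
// Illustrative sketch of a lesson as an ordered sequence of RLOs.
// Rlo is reduced to a title and an assessment-item count for self-containment.
data class Rlo(val title: String, val assessmentItems: Int)

data class Lesson(
    val title: String,
    val terminalObjective: String,
    val rlos: List<Rlo>,                  // sequenced to meet the terminal learning objective
    val lessonLevelAssessments: Int = 0   // optional assessments beyond those inside the RLOs
) {
    // Total number of assessment items the learner will see across the lesson.
    fun totalAssessments(): Int = rlos.sumOf { it.assessmentItems } + lessonLevelAssessments
}

fun main() {
    val lesson = Lesson(
        title = "M16 A2 maintenance",
        terminalObjective = "Disassemble, clean, and reassemble the rifle",
        rlos = listOf(Rlo("Disassembly", 3), Rlo("Cleaning", 2), Rlo("Assembly", 3)),
        lessonLevelAssessments = 2
    )
    println(lesson.totalAssessments())  // 10
}
```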

3.2 System Design Requirements

The user experience (UX) process begins with understanding user needs, typically through methods such as contextual inquiries, interviews, task analyses, and persona creation. Because access to subject matter experts in the specific domain of interest (e.g., military maintenance) was limited, the research team and instructional system design personnel developed exemplar personas for identifying user needs. An in-house military domain expert answered a series of questions regarding use of a computer-based tool for creating quick “hip pocket” lessons for short, impromptu training opportunities. The information gathered was used to create work flows and use cases for the RAP, which informed the initial design of the user experience and user interface (UI).

Wireframing was used as an iterative process that combined the information architecture, navigation, and UI component placement for the application. The wireframes were internally baselined and used in the development of the initial system design requirements. Dynamic wireframes were completed to support walkthroughs and early evaluation of the design concepts, and the multidisciplinary team worked together to evaluate the user flow and the projected functional components of the system. As the design solidified, user interface design and interaction design were completed as part of the system design; the final UI and interaction designs guided refinement of the design specification, which in turn guided the prototype development. Figure 4 shows the progression from rough sketch wireframes of the RAP to high-fidelity interactive wireframes of the user interface.

Fig. 4. Wireframing iterations of the RAP user interface

3.3 Building the Prototype Authoring Tool

An agile software development process, including full regression quality assurance (QA) testing at each software release, was used in the development of the RAP. The agile process relies on solid code being generated and functionality being completed in two-week sprint cycles. User interface designs consisting of high-fidelity mockups, along with the associated use cases, were reviewed by the engineering team at least one sprint in advance. QA testing was performed in the sprint following the one in which development work was completed, with an independent team of QA personnel verifying the functionality of each delivered software feature. Code was deemed complete when it passed internal unit testing written by the developer, external integration testing with the overall build, and QA testing of the overall product. The human factors personnel on the team also evaluated functions and features at each sprint. In the later stages of development, the sponsor was able to review the working software product for functionality and suitability for operational use.

3.4 Creating Exemplar RLOs

During the RAP development, instructional system design (ISD) personnel supported the RLO team with instructional design, user experience, and usability testing. Creating reusable learning objects in the app required solid content, so the team searched for appropriate media for illustrating military-relevant content within the RAP. A video was selected, “Disassembly, Assembly and Cleaning of the M16 A2,” which is suited to novice warfighters/learners who are unfamiliar with the disassembly, assembly, and cleaning of a weapon. The ISD team constructed a list that explained each step in the three sections of the video. The video was downloaded and edited using Giphy Capture [https://giphy.com/apps/giphycapture] to create a series of animations in the Graphics Interchange Format (GIF). All of the GIF files were labeled and organized in a Google Drive folder.

In creating lessons and RLOs with the RAP, the ISD team began by establishing the learning goals or objectives, which drove the requirements for the associated RLOs. For the RAP example content, the goal and outcome of one lesson was defined: to support a single learner in gaining the ability to identify the parts of the M16 A2 rifle and the steps needed to effectively disassemble the rifle. Enabling and terminal learning objectives were established and conceptualized in three commonly identified components: actions, conditions, and standards. For setting the conditions, the ISD team reviewed the video to identify the supplies required to complete the task. For the standard, the team included criteria that should be met by the learner, for example, disassemble the M16 A2 rifle within 60 s. The system is designed to guide users in setting up enabling and terminal learning objectives, thereby embedding important instructional system design components to boost the quality of the learning content. For an RLO, a single action (i.e., what the trainee should be able to do) is captured for the objective, though multiple conditions under which the action must be completed may be set. Similarly, a single standard (i.e., the required level/quality of performance) is put in place; standards may include reference to the number of errors or the time to complete an action. The system guides the user, but allows flexibility in establishing the learning objectives.
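A minimal sketch, assuming the constraints just described (exactly one action and one standard, with one or more conditions), of how an objective might be validated; the class is illustrative rather than the RAP's implementation, and the example values are taken from the M16 A2 lesson above.

```kotlin
// Illustrative sketch enforcing the objective structure described above:
// exactly one action and one standard, with one or more conditions.
class LearningObjective(
    val action: String,
    val conditions: List<String>,
    val standard: String
) {
    init {
        require(action.isNotBlank()) { "An objective needs a single, non-empty action." }
        require(conditions.isNotEmpty()) { "At least one condition must be specified." }
        require(standard.isNotBlank()) { "A single performance standard is required." }
    }
}

fun main() {
    // Example objective drawn from the M16 A2 lesson described in the text.
    val objective = LearningObjective(
        action = "Disassemble the M16 A2 rifle",
        conditions = listOf("Given a cleared M16 A2 rifle and a cleaning kit"),
        standard = "Complete disassembly within 60 seconds"
    )
    println("${objective.action} | ${objective.standard}")
}
```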

Following the establishment of the objectives of the RLO, the ISD team created instructional content utilizing the embedded templates as part of the RAP tool. The templates were designed by the UI/UX and ISD team to guide users in the integration of multimedia with instructional material and content. Templates provide a standardization among RLOs that increases their overall quality and utility. A screen capture of available RAP templates for integrating images, text, video, and audio is presented in Fig. 5. Multiple options exist for each media type.

Fig. 5. Templates available to guide standardized RLO development

4 Prototype Evaluation

An informal subjective assessment of the software was conducted during RAP development. Five (5) target users completed a walkthrough of the tool with a researcher and used the tool to create an example RLO. Feedback was collected from the users on potential pain points, usability concerns, and areas of improvement with regard to features and functions. In addition, a preliminary evaluation of the RAP prototype was conducted using internal ISD reviews, as described below.

4.1 Method

Three people with experience in developing training content were asked to provide feedback on the content creation portion of the RAP. The researchers specifically sought people who varied in their level of experience in creating training content, level of comfort with technology, and domain of training content. Experience in creating content ranged from 5 to 10 years (average 6.67 years); comfort with technology ranged from 3 to 4 on a 5-point scale, with 5 being the highest level of comfort (average 3.67); and the domains of training content varied widely (e.g., project management and sales content vs. employee relations and communication content vs. industrial machinery and medical content).

Each person was first given a quick, high-level demonstration of the RAP. They then completed a cognitive walkthrough procedure in which they performed a series of tasks within the RAP while thinking aloud about what they were trying to do, what they liked, what they did not understand or expected to happen, and what could be improved. Tasks included creating a new RLO (e.g., editing the title and objective, adding new content, adding assessments, and previewing the RLO), editing an existing RLO (e.g., adding content and assessments), copying RLOs, and deleting RLOs. In addition to the insights gained during the cognitive walkthrough, participants were asked for any additional thoughts after completing the tasks. They then completed a System Usability Scale (SUS) questionnaire (Brooke 1996), which provides a composite usability score.
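For reference, SUS yields a 0-100 composite from ten items rated 1-5: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5 (Brooke 1996). The short sketch below shows that standard calculation; the example ratings are hypothetical and are not the responses collected in this evaluation.

```kotlin
// Standard SUS scoring (Brooke 1996): ten items rated 1–5; odd items contribute
// (rating - 1), even items contribute (5 - rating); the sum is scaled by 2.5.
fun susScore(ratings: List<Int>): Double {
    require(ratings.size == 10 && ratings.all { it in 1..5 }) { "SUS needs ten ratings from 1 to 5." }
    val sum = ratings.mapIndexed { index, rating ->
        if (index % 2 == 0) rating - 1 else 5 - rating   // index 0 corresponds to item 1 (odd-numbered)
    }.sum()
    return sum * 2.5
}

fun main() {
    // Hypothetical ratings for illustration only.
    println(susScore(listOf(4, 2, 4, 2, 4, 2, 4, 2, 4, 2)))  // 75.0
}
```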

4.2 Results and Discussion

Feedback on the RLO system was generally positive, and most tasks could be completed quickly with very minimal support from the facilitator. The system was found to be well designed overall for its intended purpose, and all three people said that they would like to use the system for the creation of content. However, the usability testing did surface several themes for improvement to the RAP system, in addition to some smaller, more detailed suggestions. The major themes are discussed in the following sections, along with the SUS results.

4.3 Tutorial

Though each person was mostly able to get through the tasks without help, there were a few instances where they could not locate a particular aspect of the interface when it was not immediately obvious (e.g., the arrow that brings out the RLO preview bar). It quickly became clear that a simple user interface tutorial walking the user through the common interface elements would go a long way toward preventing situations in which the user took a long time to locate, or could not locate, an aspect of the system. In most cases, once an element had been pointed out, there was no further need to help the user find it.

4.4 Overall Organization

The RAP system has a hierarchy made up of several levels: lessons are made up of RLOs, and RLOs are made up of pieces of content (e.g., text, pictures, videos) and assessments (e.g., true/false and multiple-choice questions). The current design of the RAP interface does not easily distinguish (apart from labels) which level the user is adding to. The feedback was that the consistency of interaction with the system is great, but users would sometimes become disoriented about where they were in the hierarchy. Two recommendations came out of this issue. One is to incorporate a design element into the interface that clearly distinguishes which level the user is currently adding to, such as a different background color per level. The other is to add a bar on the left that provides an outline of the lesson or RLO currently being edited as a reference point for the user. Hyperlinks within that outline that allow the user to move quickly between levels would also increase efficiency of use.

4.5 Button Design and Placement

Although the buttons in the RAP interface were generally self-explanatory, there were a few exceptions that caused some confusion for the users. One was the plus sign button that is used throughout the system as a type of enter button whenever a new item is being added (e.g., a new title). The users repeatedly commented that this was confusing because a plus sign generally indicates that you are about to add something new, not that you are entering information. This symbol can easily be changed to an arrow or another symbol that indicates entering information. Another button that could use improvement is the arrow that brings out the preview panel when selected. This button seemed to be easy to miss, and even when it was detected, it was not clear what it did. Though the tutorial would help let users know what it is there for, a redesign, possibly increasing the size and/or changing the color of the button, would also help it stand out.

4.6 SUS Results

The results of the SUS were mostly promising: the three scores were 67.5, 65, and 30 out of 100. Categories in which the RAP system received high marks from all three users included the consistency of the system and the expectation that not much technical support would be needed to use it. The two users with the higher scores also thought the system was not too complex and that not a lot of learning was needed to start using it. Given that this is the first version of the RAP system, the results are encouraging; with a few revisions to the design and the addition of a tutorial, the usability of the system is expected to increase considerably.

5 Conclusion

The RAP prototype was developed to provide a robust set of capabilities for quick and easy development of effective RLOs, leveraging the capability of mobile devices to create rich multimedia assets. The resulting Android-based mobile app: (1) guides users through the development of instructionally sound RLOs and of lessons created by sequencing multiple RLOs; (2) provides support for including multimedia assets such as video, audio, still imagery, and text to convey knowledge; (3) supports the development of embedded assessments within RLOs; and (4) saves/packages the instructional content in a sharable format (as a sharable content object). In the near future, the resulting RLOs will be able to be imported into the GIFT authoring tools to create individualized, adaptive learning experiences.