
1 Introduction

The ease of use of a product or system in the context of its user is determined by usability testing (Conyer 1995). The concept of usability testing is applied in human–computer interaction (HCI) research and has provided guidelines and principles for software design, especially user interfaces (Shneiderman 1992; Mayhew 1992; Nielsen and Lavy 1994). While a good software user interface ensures a high degree of acceptability, it is not a substitute for usability testing (Granic and Cukusic 2011). Usability is defined as the degree of ease as well as effectiveness of use (Shackel 1984), and usability testing is the evaluation of a product to determine whether it achieves its intended use satisfactorily in an effective and efficient manner (Rubin and Chisnell 2008).

According to Conyer (1995), there are two approaches to usability evaluation, namely user testing and usability testing. User testing involves the analysis of user behavior when using the product, while usability testing evaluates the product to establish whether it meets the expected results satisfactorily when performing a task.

This study combines usability and user competence testing within the context of mobile learning. The study is guided by the ability of the user to perform a learning task, empirically measuring the human interaction elements of the product or prototype (McLaughlin 2003) in relation to modified Bloom's taxonomical learning levels. It involves monitoring each interaction between device and user during a learning activity. Information is gathered through observation, user responses, and measurement of user performance.

Mobile technology has been accepted and adopted quickly by learners, who show high ownership and preference. Longitudinal survey data indicate that despite high use of these devices, their use in learning is not as widespread as the devices themselves (Dahlstrom and Bichsel 2014). Less than 50% of learners do classwork daily from mobile devices at home (Wright 2013), which is small relative to the percentage of ownership. This indicates that device ownership does not translate directly into proficiency or usability, especially in learning (Chen et al. 2015). Moreover, no established usability testing methodology for e-learning or mobile learning systems exists (Granic and Cukusic 2011), and there is a need for research and empirical evaluation of mobile learning products. This study offers such an evaluation and contributes by combining the usability evaluation model of Hans et al. (2008) with educational evaluation (Nielsen 1993) by means of two sets of criteria, namely learning-with-software heuristics (Squires and Preece 1996) and pedagogical dimensions (Leslie 2016). It is expected that this contribution and its general findings will facilitate understanding of how to evaluate and improve the usability testing of educational technologies, especially mobile learning, before adoption.

2 Literature Review

This study reviewed literature on usability testing in general, including ergonomics and pedagogical learning, as well as an assessment of previous research. After an extensive review, a conceptual model was built by combining elements of the Jigsaw model of Squires and Preece (1996), the model of Kim and Han (2008), and a modified Bloom's taxonomy model through a logical decomposition framework of the various elements. These models were found suitable for consideration in this study.

3 Aim of the Research

The aim of this study was to provide an innovative and systematic framework for the usability testing of mobile learning technologies.

4 Conceptual Framework of the Study

Previous models of usability evaluation were used to formulate the conceptual framework, which was used to develop the current usability-testing model. The conceptual model has three stages:

Stage 1: Classify usability dimensions

Informed by human interaction elements (HIEs), the HIEs were classified into design features, covering both hardware and software, and user response features, i.e. impression features that are purely subjective.

Stage 2: Develop usability measures of learning

Learners' specific learning tasks (Squires and Preece 1996) are identified, mapped onto the features of the HIEs, and then classified according to Bloom's taxonomy (cognitive, affective or psychomotor). The measures are leveled against learning outcomes based on the way operational tasks integrate to meet the learners' needs (Squires and Preece 1996).

Stage 3: Build usability model

The elements that show strong relationships between observation and perception of the learner are selected to construct the model.

5 Methodology

This study used a focus group method to identify, group and define the HIEs for the model. The researchers prepared the guidelines and schedule for the focus groups, which were drawn from a tertiary institution. During the discussions, the researchers moderated the proceedings while each group appointed a rapporteur. The focus group was composed of 10 purposefully selected lecturers (5 from computer science and 5 from education) and 15 learners (10 from computer science and 5 from education). The sample size matched the recommended number of participants in a focus group, which is between four and ten (MacIntosh 1993; Goss and Leinbach 1996; Kitzinger 1995), and the discussion sessions lasted between one and two hours (Powell and Single 1996).

6 Procedure

This study was granted ethical clearance and the go-ahead by the management of the university where it was conducted. Participation in the study was voluntary.

The researchers recruited the focus group participants and called plenary sessions for briefing. Five groups were formed, each with 2 lecturers and 5 learners; group members were randomly assigned. There were two sessions: the first for brainstorming and the second for defining HIE measures. Using the HIE measures, a scorecard was developed to aid the usability evaluator in collecting data.

The study began by defining the usability dimensions through logical selection criteria; usability measures for the model were then developed. The model was tested in an iPad project used for teaching in an extended program with 98 learners. Each learner was given an iPad, which they used for reading, referencing and accessing the internet. Data were collected using a questionnaire given to the learners. Five computer science lecturers evaluated the usability of the iPad using a scorecard, while the learners assessed their experience of the iPad as a learning tool through the questionnaire. The two sets of results were computed and correlated.
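The final correlation step can be sketched as follows. The scores below are illustrative placeholders, not the study's data, and the Pearson coefficient is one plausible choice, since the paper does not name the statistic used.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean usability scores per dimension:
# learners' questionnaire vs. lecturers' scorecard.
learner_scores = [4.0, 5.0, 3.0, 4.0, 5.0]
lecturer_scores = [3.5, 4.5, 3.0, 4.0, 4.5]

r = pearson_r(learner_scores, lecturer_scores)
```

A value of r close to 1 would indicate the kind of agreement between learner and expert evaluations that the study reports.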

7 Focus Group Activities

The first session required the group to identify suitable dimensions for usability testing. Researchers provided a list of dimensions from various models to choose from, but the groups were allowed to suggest new dimensions. The selection criteria were provided, as indicated in Fig. 1. After a break, a plenary session was called and each group’s rapporteur presented their list of dimensions. The plenary agreed on the final list of dimensions to be used in the model.

Fig. 1. A logical flow method for selecting usability dimensions

In the second session, each group was required to come up with HIEs for each dimension and to suggest a measure for each. All reports were collected from the rapporteurs, and a smaller group of the participating lecturers evaluated and compiled the final elements of the model and the measures for all HIEs.

The criteria for evaluating the dimensions are shown in Fig. 1. Each dimension was categorized as a performance, impression or learning indicator and mapped onto Bloom's taxonomical order. The mapping assessed whether the dimension promoted any of the learning levels as classified by Bloom. If a dimension did not fit any of the domains it was rejected; if accepted, dimensions could be grouped together and renamed.
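The screening logic of Fig. 1 can be sketched as a simple filter. The candidate dimensions and their domain mappings below are illustrative assumptions, not the focus groups' actual worksheets.

```python
BLOOM_DOMAINS = {"cognitive", "affective", "psychomotor"}

def screen_dimension(name, promoted_domains):
    """Accept a candidate dimension only if it promotes at least one
    Bloom domain; otherwise reject it, mirroring the Fig. 1 flow."""
    matched = set(promoted_domains) & BLOOM_DOMAINS
    return ("accepted", matched) if matched else ("rejected", set())

# Hypothetical candidates from a brainstorming session.
candidates = {
    "Memorability": {"cognitive"},
    "Satisfaction": {"affective"},
    "Screen glossiness": set(),  # promotes no learning level
}

results = {name: screen_dimension(name, doms)[0]
           for name, doms in candidates.items()}
```

Accepted dimensions could then be grouped and renamed by the plenary, as the procedure describes.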

The final model of usability dimensions included Application Proactivity, Consistency, Memorability, Interactivity, Connectivity, Efficiency and Satisfaction.

8 Results and Interpretation

In this study, 64.8% of the participants were male and 35.2% were female, and the average age was 20 years. The usability evaluations showed that most of the learners (88.1%) were satisfied with the iPad as a learning tool: 59.5% strongly agreed, 28.6% agreed, 2.4% were neutral and 4.8% disagreed. Table 1 shows the distribution of responses. From the results it is clear that the majority of learners were satisfied and had a good learning experience, although a few were not satisfied.
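As an arithmetic check, percentages of this shape arise from response counts over a common denominator. The counts below are a hypothetical reconstruction chosen only because they reproduce the reported one-decimal figures; they are not data taken from the study.

```python
# Hypothetical response counts; n = 42 respondents is an assumption
# made only because it reproduces the reported percentages.
counts = {"strongly agree": 25, "agree": 12, "neutral": 1, "disagree": 2}
n = 42  # the remaining 2 respondents are assumed non-responses

# Convert counts to one-decimal percentages of all respondents.
pct = {k: round(100 * v / n, 1) for k, v in counts.items()}

# "Satisfied" aggregates the two agreement categories.
satisfied = round(pct["strongly agree"] + pct["agree"], 1)
```

Under this assumed denominator, the two agreement categories sum to the 88.1% satisfaction figure reported above.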

Table 1. A summary of learners' responses to iPad usability

9 Conclusion

This study proposed a usability testing model for mobile learning technologies comprising a collective set of human interaction element measures associated with mobile technology and learning. The results showed a high correlation between the evaluations of the learners and the experts (lecturers), and mobile learning overall was perceived to satisfy learners.