Abstract
Mobile devices have gained popularity among learners, prompting educators to adopt them for teaching and learning. Despite high mobile device use amongst learners, device ownership does not translate directly into usability. Educators demand that a technological learning tool be usable and effective in supporting the way learners learn; however, there are few tools for evaluating the usability of mobile devices within the context of learning. Using focus groups with learners as participants, this study developed a framework for usability testing. Suitable Human Interactive Elements (HIEs) were proposed by the focus groups, together with associated measurements to compute indexes. The framework was validated and used to evaluate iPad use in class at a tertiary institution. The results revealed that 88.1% of learners were satisfied with learning on the iPad, with a correlation of .926 between the learners' and lecturers' usability evaluations.
1 Introduction
The ease of use of a product or system in the context of its user is determined by usability testing (Conyer 1995). The concept of usability testing is applied in human-computer interaction (HCI) research and has provided guidelines and principles for software design, especially user interfaces (Shneiderman 1992; Mayhew 1992; Nielsen and Levy 1994). While a good software user interface ensures a high degree of acceptability, it is not a substitute for usability testing (Granic and Cukusic 2011). Usability is defined as the degree of ease as well as effectiveness of use (Shackel 1984), and usability testing is the evaluation of a product to determine whether it achieves the intended use satisfactorily in an effective and efficient manner (Rubin and Chisnell 2008).
According to Conyer (1995), there are two approaches to usability evaluation, namely user testing and usability testing. User testing involves the analysis of user behavior when using the product, while usability testing evaluates the product to establish whether it meets the expected results satisfactorily when a task is performed.
This study combines usability and user competence testing within the context of mobile learning. The study is guided by the ability of the user to perform a learning task, empirically measuring the human interaction elements of the product or prototype (McLaughlin 2003) in relation to modified Bloom's taxonomical learning levels. It involves monitoring each interaction between device and user during a learning activity. Information is gathered through observation, user responses, and measurement of user performance.
Mobile technology has been accepted and adopted quickly by learners, who show high ownership and preference. Longitudinal survey data indicate that, despite high ownership of these devices, their use in learning is not as widespread as the devices themselves (Dahlstrom and Bichsel 2014). Less than 50% of learners do classwork daily from mobile devices at home (Wright 2013), which is small relative to the percentage of ownership. This indicates that device ownership does not translate directly into proficiency or usability, especially in learning (Chen et al. 2015). Moreover, an established usability testing methodology for e-learning or mobile learning systems does not exist (Granic and Cukusic 2011), and there is a need for research and empirical evaluation of mobile learning products. This study offers such an evaluation and contributes by combining the Kim and Han (2008) usability evaluation model with educational evaluation (Nielsen 1993) by means of two sets of criteria, namely learning-with-software heuristics (Squires and Preece 1996) and pedagogical dimensions (Leslie 2016). It is expected that this contribution and its general findings will facilitate the understanding of how to evaluate and improve the usability testing of educational technologies, especially mobile learning, before adoption.
2 Literature Review
This study reviewed literature on usability testing in general, including ergonomics and pedagogical learning, as well as previous research. After an extensive review, a conceptual model was built by combining elements of the Jigsaw model of Squires and Preece (1996), the Kim and Han (2008) model, and a modified Bloom's taxonomy model, through a logical decomposition of their various elements. These models were found suitable for this study.
3 Aim of the Research
The aim of this study was to provide an innovative and systematic framework for mobile learning technologies usability testing.
4 Conceptual Framework of the Study
Previous models of usability evaluation were used to formulate the conceptual framework, which was used to develop the current usability-testing model. The conceptual model has three stages:
Stage 1: Classify usability dimensions
Informed by human interaction elements, the HIEs were classified into design features, covering both hardware and software, and user response features; the latter are impression features and are purely subjective.
Stage 2: Develop usability measures of learning
Learners' specific learning tasks (Squires and Preece 1996) are identified, mapped onto the HIE features, and then classified according to Bloom's domains (cognitive, affective or psychomotor). The measures are leveled against learning outcomes based on the way operational tasks integrate to meet the learners' needs (Squires and Preece 1996).
Stage 3: Build usability model
The elements that show a strong relationship between observed performance and learners' perceptions are selected to construct the model.
5 Methodology
This study used a focus group method to identify, group and define the HIEs for the model. The researchers prepared the guidelines and schedule for the focus groups, whose members were chosen from a tertiary institution. During the discussions, the researchers moderated the proceedings while each group appointed a rapporteur. The focus group was composed of 10 purposefully selected lecturers (5 from computer science and 5 from education) and 15 learners (10 from computer science and 5 from education). Each group's size fell within the recommended range of four to ten participants for a focus group (MacIntosh 1993; Goss and Leinbach 1996; Kitzinger 1995), and the discussion sessions lasted between one and two hours (Powell and Single 1996).
6 Procedure
This study was granted ethical clearance and approval by the management of the university where it was conducted. Participation was voluntary.
The researchers recruited the focus group participants and called plenary sessions for briefing. Five groups were formed; each group had 2 lecturers and 5 learners, with members assigned randomly. There were two sessions: the first for brainstorming and the second for defining HIE measures. Using the HIE measures, a scorecard was developed to help the usability evaluator collect data.
The study began by defining the usability dimensions through logical selection criteria; usability measures for the model were then developed. The model was tested in an iPad project used for teaching in an extended program with 98 learners. Each learner was given an iPad, which they used for reading, referencing and internet access. Five computer science lecturers evaluated the usability of the iPad using the scorecard, while the learners completed a questionnaire about their experience of the iPad as a learning tool. The two sets of results were computed and correlated.
7 Focus Group Activities
The first session required the group to identify suitable dimensions for usability testing. Researchers provided a list of dimensions from various models to choose from, but the groups were allowed to suggest new dimensions. The selection criteria were provided, as indicated in Fig. 1. After a break, a plenary session was called and each group’s rapporteur presented their list of dimensions. The plenary agreed on the final list of dimensions to be used in the model.
In the second session, each group was required to come up with HIEs for each dimension and to suggest a measure for each. All reports were collected from the rapporteurs, and a smaller group of the participating lecturers evaluated and compiled the final elements of the model and the measures for all HIEs.
The criteria for evaluating the dimensions are shown in Fig. 1. The dimensions were categorized as performance, impression or learning indicators and mapped onto Bloom's taxonomical order. The mapping assessed whether a dimension promoted any of the learning levels classified by Bloom. A dimension that did not fit any of the domains was rejected; accepted dimensions could be grouped together and renamed.
The final model of usability dimensions included Application Proactivity, Consistency, Memorability, Interactivity, Connectivity, Efficiency and Satisfaction.
8 Results and Interpretation
In this study, 64.8% of the participants were male and 35.2% female, with an average age of 20 years. The usability evaluations showed that most of the learners (88.1%) were satisfied with the iPad as a learning tool: 59.5% strongly agreed, 28.6% agreed, 2.4% were neutral and 4.8% disagreed. Table 1 shows the distribution of responses to the questions. The results make clear that the majority of learners were satisfied and had a good learning experience, although a few were not satisfied.
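The headline satisfaction figure is simply the sum of the "strongly agree" and "agree" shares. A minimal sketch, using the percentages reported above:

```python
# Reproduce the reported satisfaction figure from the Likert breakdown.
responses = {
    "strongly agree": 59.5,  # percentages as reported in the study
    "agree": 28.6,
    "neutral": 2.4,
    "disagree": 4.8,
}

satisfied = responses["strongly agree"] + responses["agree"]
print(f"Satisfied: {satisfied:.1f}%")  # 59.5 + 28.6 = 88.1
```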
9 Conclusion
This study proposed a usability testing framework for mobile learning technologies comprising a collective set of human interactive element measures associated with mobile technology and learning. The results showed a high correlation between the evaluations of learners and experts (lecturers), and mobile learning was, overall, perceived to satisfy learners.
References
Chen, B., Seilhamer, R., Bennett, L., Bauer, S.: Students’ mobile practices in higher education: a multi-year study. EDUCAUSE Rev. (2015)
Conyer, M.: User and usability testing – how it should be undertaken? Aust. J. Educ. Technol. 11(2), 38–51 (1995)
Dahlstrom, E., Bichsel, J.: ECAR Study of Undergraduate Students. EDUCAUSE Center for Analysis and Research (2014)
Goss, J., Leinbach, T.: Focus groups as alternative research practice. Area 28(2), 115–123 (1996)
Granić, A., Ćukušić, M.: Usability testing and expert inspections complemented by educational evaluation: a case study of an e-learning platform. Educ. Technol. Soc. 14(2), 107–123 (2011)
Han, S., Yun, M., Kim, J., Kwahk, J.: Evaluation of product usability: development and validation of usability dimensions and design elements based on empirical models. Int. J. Ind. Ergon. 26(4), 477–488 (2000)
Kim, J., Han, S.: A methodology for developing a usability index of consumer electronic products. Int. J. Ind. Ergon. 38, 333–345 (2008)
Kitzinger, J.: Introducing focus groups. Br. Med. J. 311, 299–302 (1995)
Kreitzer, A., Madaus, G.: Empirical investigations of the hierarchical structure of the taxonomy. In: Anderson, L.W. (ed.) Bloom’s Taxonomy: A Forty-Year Perspective. Ninety-Third Yearbook of the National Society for the Study of Education, p. 24. University of Chicago Press, Chicago (1994)
Leslie, O.: The second principle. Three domains of learning – Cognitive, affective, psychomotor (2016). http://www.thesecondprinciple.com/instructional-design/threedomainsoflearning/. Accessed 22 June 2017
MacIntosh, J.: Focus groups in distance nursing education. J. Adv. Nurs. 18(12), 1981–1985 (1993)
Mayhew, D.: Principles and Guidelines in Software User Interface Design. Prentice-Hall, Englewood Cliffs (1992)
McLaughlin, R.: Redesigning the crash cart: usability testing improves one facility’s medication drawers. Am. J. Nurs. 103(4), 64A–64F (2003)
Nielsen, J.: Usability Engineering. AP Professional, New York (1993)
Nielsen, J., Levy, J.: Measuring usability: preference vs. performance. Commun. ACM 37(4), 66–75 (1994)
Powell, R., Single, H.: Focus groups. Int. J. Qual. Health Care 8(5), 499–504 (1996)
Rubin, J., Chisnell, D.: Handbook of Usability Testing: How to Plan, Design and Conduct Effective Tests, 2nd edn. Wiley, Indianapolis (2008)
Shackel, B.: The concept of usability. In: Bennet, J.C. (ed.) Visual Display Terminals, pp. 45–87. Prentice-Hall, Englewood Cliffs (1984)
Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading (1992)
Squires, D., Preece, J.: Usability and learning: evaluating the potential of educational software. Comput. Educ. 27(1), 15–22 (1996)
Wright, E.: EDU Survey: How are University students, Faculty and administrators using technology? (2013). https://www.box.com/blog/edu-survey-how-are-university-students-faculty-and-administrators-using-technology/. Accessed 17 Apr 2017
© 2018 Springer International Publishing AG, part of Springer Nature
Wario, R., Ngari, B. (2018). A Framework for Mobile Learning Technology Usability Testing. In: Stephanidis, C. (eds) HCI International 2018 – Posters' Extended Abstracts. HCI 2018. Communications in Computer and Information Science, vol 852. Springer, Cham. https://doi.org/10.1007/978-3-319-92285-0_19
Print ISBN: 978-3-319-92284-3
Online ISBN: 978-3-319-92285-0