Abstract
Currently, children and young people have more access to and contact with digital technologies, which are also increasingly present in schools. The aim of this study is to validate a set of functionalities proposed and implemented in an educational system, evaluating its usability and how it can be improved. The web-based system, called Mindboard, aims to facilitate collaboration in class and beyond it. The experiment involved students during a brief summer course to evaluate its use regarding collaboration and usability. After the last class, students answered the USE questionnaire about the system’s usability. This usability analysis showed that, overall, the system has good ease of use. Furthermore, the questions about positive points revealed some advantages that we did not expect at first and led us to interesting future work.
1 Introduction
Currently, children and young people have more access to and contact with digital technologies. Government programs in many countries are also promoting the availability of mobile devices, computers and Internet access to students and teachers in public schools. Students are also bringing their own devices to classrooms more and more often, in a trend known as “Bring Your Own Device” (BYOD); notebooks, tablets and smartphones are increasingly present and used in class, and are also used to study after class. These devices could enrich the learning experience regardless of where or when it occurs, without being restricted to in-class periods [1]. According to the Horizon Report [2], education institutions worldwide are gaining increasing quality and availability of Internet access on their premises. Together, these facts are forming a large ecosystem to support the use of technologies during classes and after them.
In this scenario, the aim of this study is to validate a set of functionalities proposed in a collaborative educational system, designed and developed with the intention of facilitating collaboration in class and beyond it, although this relies, of course, on teachers’ pedagogical use of the tool. We will therefore focus on evaluating the usability of the system and on how these results could be used to improve future versions.
This web-based educational system, named Mindboard, had a prototype developed with the chosen set of features. These features include allowing teachers and students to collaborate with each other during classes and outside them. During classes, teachers can share slides and source code (or other forms of textual content generated during the class) and answer student questions in real time. Students, in turn, can receive all shared content, annotate it and choose whether to share their notes and with whom, ask questions, answer questions from other students or from the teacher and supply anonymous feedback to teachers about their understanding of the class at each moment. All this information is logged and remains accessible to students and teachers later. Figure 1 shows a screenshot of Mindboard during a class.
Figure 2 shows another Mindboard feature: the display of textual content in real time. In this example, the content was JavaScript source code.
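Although the paper does not detail Mindboard’s implementation, the following minimal sketch illustrates how this kind of real-time broadcasting of textual content can be achieved in Node.js with the Socket.IO library; the event name 'content:update' and the port are hypothetical choices for illustration only.

// Hypothetical sketch: relaying a teacher's content update to all connected students
const server = require('http').createServer();
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // a teacher (or any client) pushes a new slide or source-code snippet
  socket.on('content:update', (payload) => {
    // broadcast the content to every other connected client in real time
    socket.broadcast.emit('content:update', payload);
  });
});

server.listen(3000);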
Outside of class, users can watch classes again as video lessons, enriched with metadata created during the class, such as annotations, questions and answers, all synchronized with the content. During an online programming class, for instance, the video may show a piece of code while the corresponding note in the system is highlighted. Students can also ask questions and take notes during online, asynchronous video lessons. With these features, the system can also be used in distance-only learning.
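The mechanism that keeps notes synchronized with the recorded class is not described in detail in this paper, but a minimal client-side sketch of this kind of synchronization could look like the following, assuming a page with a video element and a note panel element (both element names and the note data are hypothetical):

// Hypothetical sketch: showing the note that matches the current playback position
const notes = [
  { time: 95, text: 'Explanation of the callback parameter' },
  { time: 240, text: 'This query uses MongoDB aggregation' },
];

const video = document.querySelector('video');
const panel = document.querySelector('#note-panel');

video.addEventListener('timeupdate', () => {
  // pick the most recent note whose timestamp has already been reached
  const reached = notes.filter((n) => n.time <= video.currentTime);
  panel.textContent = reached.length ? reached[reached.length - 1].text : '';
});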
After prototype development, Mindboard was used in an experiment with students in a summer course to evaluate its use regarding collaboration and usability.
In the next section we discuss Computer-Supported Collaborative Learning and usability questionnaires. In Sect. 3 we describe our experiment and in Sect. 4 we analyze its results. Finally, in Sect. 5 we present our conclusions and examine possible future work.
2 Background: CSCL and Usability Questionnaires
This paper describes an experiment using a computer system called Mindboard. Since it is a collaborative system, we need to understand the context in which it is inserted. Later in this section we describe the usability questionnaires that we considered and the USE Questionnaire, which was selected for use during the experiment.
2.1 Computer Supported Collaborative Learning
Educational systems are computational tools that aid in one or more of the processes involved in teaching and learning activities [3]. They may serve one or more activities, such as learning management, simulation, tutoring, or as a means of communication and information exchange. When an educational system is used collaboratively, it can be classified as Computer Supported Collaborative Learning (CSCL).
CSCL is a special case of Computer Supported Collaborative Work (CSCW). CSCW emerged as early as 1985 as a way to meet an anticipated demand of the labor market. By that time, researchers already saw a new scenario emerging in which workers could do their jobs collaboratively and not necessarily in the same place [4].
Mindboard is considered a CSCL system since it allows users to identify their role, share information with each other and discuss the class subject among themselves.
2.2 Usability Questionnaires and the USE Questionnaire
Usability analysis can be performed in many different ways, such as interviews, behavior analysis and questionnaires. Mostly because of time considerations and the ease of recording and extracting data, we chose to use questionnaires to study Mindboard’s usability. There are many validated and well-known questionnaires that could be used for this task. Those we considered for this work are described below.
The Questionnaire for User Interaction Satisfaction (QUIS) is a tool created by a multidisciplinary team at the Human-Computer Interaction Laboratory of the University of Maryland [5]. It is relatively long and attempts to break usability down into several specific aspects. In this work we opted for a tool less focused on such details, in order to get an overview instead.
The Computer System Usability Questionnaire (CSUQ) was designed by Jim Lewis and is in the public domain. It is reliable, but it lacks a standard [6].
The Usefulness, Satisfaction and Ease of Use (USE) questionnaire, designed by Arnie Lund, aims to analyze and summarize graphical interface usability using a model composed of three factors: usefulness, satisfaction and ease of use [7]. We believe these factors are the most important for our application, which should be used voluntarily, picked up quickly with little or no training and used transparently, so that it does not become an obstacle during classes or study time. USE also gives an overview of these factors without going into very specific details in its questions, instead providing ample opportunity for participants to make qualitative comments, which was our goal in this particular instance. It is also in the public domain and was the questionnaire we chose to use.
The USE questionnaire consists of 27 questions answered on a seven-point Likert scale, split into three groups: Usefulness, Satisfaction and Ease of Use. For each question there is also a field where the user can enter a comment about it. At the end of the questionnaire, the user is prompted to name three positive and three negative points found in the system. In Sect. 3.1 we describe how we administered the questionnaire and collected the answers during the experiment, and in Sect. 4 we discuss the results.
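As a simple illustration of how such answers can be summarized (the scores reported in Sect. 4 are per-question and per-factor averages), the sketch below computes the mean of seven-point Likert answers for each USE factor; the response data and field names are invented for the example.

// Hypothetical sketch: averaging 7-point Likert answers per USE factor
const responses = [
  { usefulness: [6, 5, 7], easeOfUse: [6, 6, 5], satisfaction: [7, 6, 6] },
  { usefulness: [5, 6, 5], easeOfUse: [6, 5, 6], satisfaction: [6, 5, 7] },
];

// collect every answer for one factor across all respondents and average it
function averageFactor(factor) {
  const scores = responses.flatMap((r) => r[factor]);
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}

['usefulness', 'easeOfUse', 'satisfaction'].forEach((factor) =>
  console.log(factor, averageFactor(factor).toFixed(2)));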
3 Experiment Description
The experiment was approved by USP’s Ethics Committee with protocol number 39888114.0.0000.5390.
3.1 Setting up the Course
The experiment was conducted during a programming course named “MEAN Stack in Practice”, held at another institution, in which students were introduced to web programming using NodeJS. The course was advertised online only, using local Facebook groups about Information Technology and the personal Facebook profile and contacts of one of the authors. After registration, we had 16 enrolled students, who were divided into two groups: one using the Mindboard system, with 9 students, and a control group not using it, with 7 (the difference in the size of the groups was mostly due to student schedules and other decisions outside our control).
The course took only four days and each group had classes in separate weeks. We conducted three classes in person and one online, using video screencasting. The course presented four main topics: MongoDB, ExpressJS, AngularJS and NodeJS. The main objective of the course was to introduce students to web programming using these technologies, allowing them to apply these subjects even during the classes. The two groups were exposed to the same content, but in the Mindboard group we sent and presented all the material and source code using Mindboard, while the control group sometimes received class material via email. Students could access class content at any time after it was presented. Course registration was completely open to the public and only a small fee was charged to cover the costs of a coffee break during classes.
3.2 Data Collection
During the experiment a substantial amount of usage and behavioral data was collected. Figure 3 shows the laboratory setup and how we captured the course in audio and video for later analysis. The webcam used was a Logitech C920 and we captured video at a resolution of 720p. These video recordings were used to count collaborations between students and with the teacher. We also used a survey, in both groups, to ask how the students collaborated with each other outside the classroom and without using Mindboard, and to get a general idea of how frequent this collaboration was. In the group that used Mindboard we also logged usage data from the system. After the last day of the course, we presented participants in the Mindboard group with the USE questionnaire to inquire about the system’s usability. Google Forms was used as the medium for all questionnaires and surveys.
4 Results
The experiment occurred during the planned period as expected, but we had some problems with the collected usage data. The main problem we encountered was the small number of participants. We initially expected around 40 students split into two equal groups, but we had 16 (7 students in the first group and 9 in the second one). Combined with the short duration of the course, this meant we collected a relatively small amount of usage data for Mindboard, compared to what we had planned. Another fact that introduced bias in our experiment was that, in this small universe of users, some groups of students already knew each other, which led them to collaborate more naturally. Collaboration was an important metric in the analysis of Mindboard because it had been designed to aid mainly in this activity. Finally, collaborations were counted using the audio and video streams recorded during classes, but only during the analysis did we discover that the audio was unusable (the room was too large to be captured using only one microphone). Another problem we faced was related to the use of a single camera, which was occluded at certain moments. These factors led to our collaboration count having less quantitative importance than we expected at first and to a more careful analysis of the qualitative feedback supplied by students (and, of course, of the USE questionnaire to evaluate Mindboard’s usability).
With those caveats in mind, the number of interactions per student recorded in the Mindboard group and the control group was approximately the same. Qualitative analysis of these interactions, however, considering mostly student feedback and, to a smaller degree, teacher feedback, suggests that in the control group between 20% and 30% of interactions occurred because of difficulties in seeing the content being projected on a screen during classes, particularly when showing code, while in the Mindboard group there were no instances of this problem, since students could and did follow content on their own devices. Thus the Mindboard group reported having more interactions of higher quality, collaborating and discussing the content itself instead of trying to decode it from what was projected. Another situation in which we detected many interactions of little relevance in the control group was when the teacher alternated between source code files during class. In this kind of situation, one student or another always asked to go back to the previous file, while others were already thinking about the new file, which caused unproductive interruptions during classes and appeared to have a negative impact on the stream of thought for both teacher and students. The feature that allows students to go back and forth in the content and review it as they wish on their own devices during classes helped many students in this sort of situation.
Students also reported that they suffered less from lapses of attention when using Mindboard to navigate the content than in conventional classes. Finally, we had a couple of students with moderate visual impairment who were quite thankful and excited about the system for facilitating their access to the content during classes (Mindboard has options to control parameters such as font size that help in this situation, but we confess we had not foreseen this accessibility advantage before the experiment, since it was not our primary focus).
Analyzing the USE Questionnaire data we discovered some interesting things. Regarding the system’s usefulness, Mindboard averaged 5.6 out of 7. Table 1 shows each question with its average score.
The mean score of 5.6 suggests that students considered the system to have a good level of utility. The lowest score in this set of questions concerns whether the system does everything students expect it to do. We believe this occurred because many students would like to see more specific features in the system, such as some way to integrate the real-time text sharing with an Integrated Development Environment (which was specifically mentioned in the comment section of this question).
The set of questions about Ease of Use aims to measure the user’s experience with the system. Mindboard scored well in this aspect too, with a mean of 5.76 out of 7. Table 2 shows each question and its average score. The question with the highest score was about whether the user could easily learn how to use the system without instructions, with a score of 6.2.
In the set of questions related to user satisfaction, Mindboard also scored well, with an average of 5.7 out of 7. Table 3 shows each question’s score. The question with the highest score was about whether the user could easily remember how to use the system, with a score of 6.7. Considered along with the answers showing high learnability discussed above, this shows that Mindboard is both easy to learn without instructions and easy to use without much effort once it is learned. Because the system is designed for use in education, these aspects of high learnability and memorability are considered very important (even more than efficiency, and more than they would be for production software) [8], both to encourage users to adopt the system and so that its use does not take attention away from the learning process.
Besides these scores and the comments for each question, the USE questionnaire has one feature that was very useful for our analysis of Mindboard: it asks participants about the positive and negative points of the system. It may look like a simple question, but its results were quite important for us in gathering qualitative feedback, explaining some of the scores and collecting suggestions to improve the system or to add future features.
The positive points mentioned most often were related to ease of use and learning and to having a friendly, intuitive and functional user interface. Many other comments mentioned the real-time content sharing as an important positive point, as well as the integration between annotations and video.
Negative points included complaints about the long time it took to load the system (we only found out during the experiment that we would have Internet access with rather low bandwidth). Other negative points mentioned were related to integration with other systems, such as Integrated Development Environments (IDEs), and the lack of a real-time chat. After this experiment, we do have plans to perhaps include IDE support in future versions, even if it is an improvement very specific to a particular area of teaching. Regarding chat, however, we designed Mindboard with the intent of being used alongside learning management systems already adopted by institutions, not to replace them, and many of these systems already support several features that we consider very interesting but do not plan to replicate, such as real-time chat and discussion forums. During the experiment, however, we opted to use Mindboard on its own, which explains the absence of certain features many users commented on.
Based on the above, we consider that these three groups of questions and, particularly, the comments after each question and the positive and negative points at the end of the questionnaire provided us with a very good opportunity to analyze our system’s usability and to find where we could improve it. Having the qualitative comments to rely upon meant that, even though USE’s questions were not as numerous and specific as, for instance, those in QUIS, we could still find very specific positive and negative aspects of the system; this analysis was made less complex by the fact that we only had a relatively small sample. If we had to analyze hundreds of questionnaires, it would likely be better to dispense with so many comments, which are more difficult to treat than the numeric answer data, and instead use more numerous and more specific questions.
5 Conclusion
Despite the problems we had with the small amount of usage data collected, the experiment showed us some interesting things about Mindboard’s utility and ease of use in and out of classes. The USE questionnaire also proved to be the right choice for our context, providing us with a lot of information about which aspects of the system we could improve, along with collections of positive and negative points that will help us create a better version of Mindboard. The qualitative comments encouraged by USE were very important for us, given our number of participants, but we believe they would be considerably more difficult to treat with a much larger sample, for instance in the hundreds.
As future work, we would like to conduct more experiments so we can capture more data. The main thing we would like to change is the size of the class. We plan on running the next experiment with two classes, each with 40 to 60 students, and for a longer period of time instead of only four classes (one of which was online only). We also plan to test the system with more teachers, not only more students. Finally, we will discard the idea of trying to record the sound of interactions and attempt to work with them only through a combination of video recordings and self-reporting. Regarding video, we intend to record it with multiple cameras (at least two) to reduce the problem of occlusion.
For the next versions of Mindboard we plan to add a new set of features, such as integration with development environments (for programming classes) and the ability to annotate with drawings instead of only text. We also plan to show examples of integration between Mindboard and other systems already used in education, such as Moodle [9].
References
Johnson, L., Adams, S., Cummins, M.: The NMC Horizon Report: 2014 K-12 edn. The New Media Consortium, Austin, Texas (2014)
Johnson, L., Adams, S., Cummins, M.: The NMC Horizon Report: 2012 Higher Education edn. The New Media Consortium, Austin, Texas (2012)
Tchounikine, P.: Computer Science and Educational Software Design. Springer, Heidelberg (2011). http://link.springer.com/10.1007/978-3-642-20003-8
Greif, I.: Computer-Supported Cooperative Work: A Book of Readings. Morgan Kaufmann Publishers, San Mateo (1988)
Norman, K., Shneiderman, B., Harper, B.: QUIS: The Questionnaire for User Interaction Satisfaction (1987). www.cs.umd.edu/hcil/quis/
Lewis, J.R.: IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int. J. Hum.-Comput. Interact. 7(1), 57–78 (1995)
Lund, A.M.: Measuring Usability with the USE Questionnaire (2001). http://www.stcsig.org/usability/newsletter/0110_measuring_with_use.html
Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading (1998)
Moodle. http://moodle.org
Acknowledgments
The authors would like to thank CAPES, the Brazilian Federal Agency for Support and Evaluation of Graduate Education within the Ministry of Education of Brazil, for financing the scholarship that supported this work.