Abstract
Extended learning environments that collect data for learning analytics and support learners will be useful for education at all ages. As first steps towards building such environments, we developed a system for multimodal learning analytics using eye tracking and EEG measurement, and an inclusive user interface design for elderly learners based on a dual-tablet system. Multimodal learning analytics can help extract where and how learners with varied backgrounds experience difficulty in the learning process. The eye tracker retrieves information on where learners pay attention, and EEG signals provide clues for estimating their mental states during gazes. We developed a system to measure these multimodal responses simultaneously and are working to integrate the information to explore learning problems. A dual-tablet user interface with simplified visual layers and more intuitive operations was designed to reduce the physical and mental loads of elderly learners. A prototype was developed on a cross-platform framework and is being refined through iterative formative evaluations with elderly participants in order to improve the usability of the interface design. We propose a system architecture applying the multimodal learning analytics and the user-friendly design for elderly learners, which couples learning analytics “in the wild” with learning analytics in controlled lab environments.
1 Introduction
There is an increasing need to develop learning environments for people of all ages as the average life expectancy in many countries increases. In addition, the advances and the pervasiveness of smart technologies are arguably changing what people should learn to live meaningfully as valuable participants of our society, which can deeply influence the design and development of the technology-enhanced learning environments of the future. Such environments may potentially enhance multigenerational co-creation and social activities of older adults [1].
In this paper, we present our approach and first steps towards building technology-enhanced learning environments for adults of all ages. Our approach exploits learning analytics, which involve the measurement, collection, analysis, and reporting of data about learners and their contexts to optimize learning environments. Although learning analytics are often intended for online learning environments such as MOOCs, they are increasingly used in hybrid learning environments such as university courses that combine physical classrooms and digital learning tools.
Learning analytics-based hybrid learning environments can be extremely useful for supporting learners of all ages, as they can combine online, offline and in situ learning to acquire different kinds of knowledge and skills. Most of the conventional learning analytics systems, however, are inherently limited in supporting people of all ages. We thus aim at improving the usability of learning systems for all ages, enriching learning data, and providing relevant feedback to learners and instructors.
Figure 1 shows the system architecture of a learning analytics-based hybrid learning environment for university students. The system allows learners and instructors to use course management tools, e-portfolio tools, and learning material management tools via the web-based user interface. These tools generate exhaust data that are stored as learning records in the database. Learning analytics tools process and visualize the learning records to provide feedback to learners and instructors (e.g., by displaying descriptive statistics of the usage of learning materials).
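For concreteness, a learning record and a simple descriptive statistic of this kind might look as follows. This is a minimal Python sketch; the field names and schema are our assumptions, since the actual database layout is not described here.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class LearningRecord:
    """One row of exhaust data from the web-based tools (hypothetical schema)."""
    user_id: str
    tool: str        # e.g., "material_viewer", "e_portfolio"
    action: str      # e.g., "open_page", "add_memo"
    material_id: str
    page: int
    timestamp: datetime

records = [
    LearningRecord("u01", "material_viewer", "open_page", "stats-101", 3, datetime.now()),
    LearningRecord("u02", "material_viewer", "open_page", "stats-101", 3, datetime.now()),
]

# A descriptive statistic of material usage, as mentioned above:
views_per_page = Counter((r.material_id, r.page) for r in records
                         if r.action == "open_page")
print(views_per_page)
```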
Systems based on this architecture cannot support diverse users of all ages unless they accurately detect and resolve a wide range of learning problems and offer user interfaces that are friendly to non-tech-savvy people. In order to extend learning analytics-based hybrid learning environments to adults of all ages, we propose a novel platform based on (1) multimodal learning analytics using eye-tracking and EEG measurement, which can detect and resolve learning problems accurately based on ‘honest’ signals, and (2) an easy-to-use user interface design for non-tech-savvy older adults, which exploits the ubiquity of tablet computers to enhance the physical affordance of information. First, our multimodal learning analytics environment can support the process of identifying a text, image, etc. in a learning material that various learners perceive as difficult to understand, and of examining the different mental states related to such perceived difficulty. An eye tracker can identify the pieces of information that learners pay attention to, while EEG signals provide clues for estimating learners’ mental states that eye trackers cannot uncover. We have therefore developed a system that measures eye-tracking and EEG signals to support the process of narrowing down what hampers learning, based on multimodal signals. Second, we have developed a prototype of a cross-platform, dual-tablet user interface that enhances the physical affordance of information and supports intuitive operations. We expect that the proposed user interface will reduce the physical and mental loads on older adults using learning analytics-based systems. We are conducting iterative formative evaluations of the prototype focusing on usability improvements for older adults. Furthermore, we discuss how these developments can be integrated to realize a novel learning support environment for adults of all ages.
2 Multimodal Sensing to Improve Educational Design
2.1 Background and Hypothesis
To develop a learning environment for people of all ages, the design of educational materials is an important factor. Conventional materials may not be appropriate for every learner, given their different backgrounds. Extracting where such learners experience difficulty with conventional materials is very useful for improving educational design. High-frequency physiological data recorded during the learning process can provide more information about learners' mental states, including feelings of difficulty. In this study, we introduced simultaneous measurement of eye gaze data and electroencephalogram (EEG) signals during the self-learning process in our learning support system.
An eye-tracking system can provide information on where learners pay attention in a material during the learning process. In the field of educational science, eye tracking has been widely used to evaluate and improve the visual design of computer-based learning [2]. Common metrics of eye-tracking data include spatial parameters that indicate where the learners focused. Several studies have shown that learning performance and abilities are reflected in eye movements, i.e., where and for how long learners focus on an educational material [3,4,5]. This eye-tracking literature supports the view that eye-tracking data can be effective for extracting points that are difficult to understand for learners with less background knowledge.
Eye gaze data can indicate the regions to which attention is oriented, but there are several candidate reasons why learners pay attention: interest, difficulty, organizing, and so on [6]. To understand why specific points were focused on, another type of physiological measurement is necessary. EEG can be measured simultaneously with eye gaze data. Integrating EEG and eye gaze measurement makes it possible to assess learners' emotion and motivation during eye fixations, and to determine where they found the material difficult to understand.
Our investigation is based on the following hypothesis:
H1.
When learners have difficulty understanding descriptions in a textbook, they gaze at those points longer than at points they can understand. However, gaze data also include responses involved in other cognitive processes.
H2.
EEG signals reflect the mental state during gaze fixation, and these signals can be used to estimate the cognitive process during fixation.
2.2 Methods
System Overview.
In this system, eye tracking, EEG, and computer-based learning were integrated and their events were synchronized. When a participant started the learning task, the eye-tracking system started to measure eye gaze data and inserted a start event into the EEG measurement system. Click events to advance or go back through the pages displayed on the screen were synchronized with the EEG and eye-tracking measurement systems. The learning task and the automatic operation of the eye-tracking system were controlled by programs developed in-house using Psychtoolbox [7] and the Tobii Pro SDK for Matlab [8]. The EEG measurement was controlled by the Cognionics data acquisition software suite (Cognionics Inc., San Diego, US). EEG signals and inserted events were sent from the EEG headset to the data acquisition software on a PC via Bluetooth.
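The essence of this synchronization is to stamp every task event onto both recordings against one reference clock. The following minimal Python sketch illustrates the idea only; the actual system was implemented in Matlab with Psychtoolbox and the Tobii Pro SDK, and the class and stub trigger below are hypothetical.

```python
import time

class StubTrigger:
    """Stand-in for the amplifier's trigger interface (hypothetical)."""
    def send(self, label: str) -> None:
        print(f"EEG marker: {label}")

class EventSynchronizer:
    """Tags task events with one shared clock so the EEG and gaze
    recordings can be aligned offline."""
    def __init__(self, eeg_trigger) -> None:
        self.eeg_trigger = eeg_trigger
        self.events: list[tuple[float, str]] = []  # kept alongside the gaze log

    def mark(self, label: str) -> None:
        t = time.monotonic()            # single reference clock
        self.eeg_trigger.send(label)    # marker inserted into the EEG stream
        self.events.append((t, label))  # same marker kept for the gaze record

sync = EventSynchronizer(StubTrigger())
sync.mark("task_start")    # when the learning task begins
sync.mark("page_advance")  # inside each click handler
```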
Experiment Design.
The current experiment was approved by the ethical committee of Kyushu University. All procedures were performed in accordance with approved guidelines of the ethical committee of Kyushu University. All participants gave written informed consent in accordance with the Declaration of Helsinki before participating.
Participants performed a self-paced learning task in a dark room. They read two learning materials: “Correlation and Statistical Testing” and “Principal Component Analysis and Factor Analysis”. These materials were designed for an information science course at Kyushu University and include text, images, and equations. We recruited 19 participants who had little knowledge of information science and statistics, and confirmed their experience of learning content related to that presented in our experiment. The participants followed the same sequence of tasks.
The learning materials were presented on a full-screen LCD display, and participants could advance or go back through the pages by clicking. After reading each page, they were asked to rate its difficulty and interest levels from 0 (easy, no interest) to 10 (difficult, high interest) on an interactive slider scale (Fig. 2). After reading one set of material, they took a quiz to confirm their understanding of the contents.
The experimenter monitored the experimental systems and the measured signals from outside the dark room.
Eye Tracking Measurement.
During the learning sessions and quizzes, eye movements were recorded with a 150-Hz remote eye-tracking system (Tobii Pro Spectrum 150 Hz, Tobii AB, Stockholm, Sweden) mounted on the LCD display. The system was calibrated at the beginning of each learning session and quiz. The distance from the display to the eyes was kept at 57 cm.
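H1 concerns how long learners fixate on particular points. As an illustration of how fixation durations can be extracted from raw gaze samples, the following is a minimal sketch of a dispersion-threshold (I-DT) fixation filter in Python. The sampling rate matches the 150-Hz recording above, but the thresholds are illustrative assumptions, and this is our own sketch rather than the algorithm of the Tobii software.

```python
import numpy as np

def idt_fixations(x, y, fs=150, max_disp=30.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection.
    x, y: gaze coordinates in pixels (numpy arrays); fs: sampling rate (Hz);
    max_disp: dispersion threshold in pixels; min_dur: minimum fixation
    duration in seconds. Returns (start_s, end_s, cx, cy) tuples."""
    min_len = int(min_dur * fs)
    fixations, i, n = [], 0, len(x)
    while i + min_len <= n:
        j = i + min_len
        # dispersion of the window: (max - min) in x plus (max - min) in y
        disp = (x[i:j].max() - x[i:j].min()) + (y[i:j].max() - y[i:j].min())
        if disp <= max_disp:
            # grow the window while dispersion stays under the threshold
            while j < n:
                wx, wy = x[i:j + 1], y[i:j + 1]
                if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_disp:
                    break
                j += 1
            fixations.append((i / fs, j / fs,
                              float(x[i:j].mean()), float(y[i:j].mean())))
            i = j
        else:
            i += 1
    return fixations
```

Fixations whose centroids fall inside an area of interest (e.g., an equation on a page) can then be summed into dwell times and compared between pages rated easy and difficult.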
EEG Measurement.
EEG was recorded from 19 channels with active electrodes (Flex sensors or Drypad sensors, Cognionics Inc.) placed according to the International 10–20 system. The reference electrode was placed on A1, i.e., the left earlobe, and the ground electrodes were placed near Fp1 and Fp2, i.e., the left and right prefrontal sites. Electrode impedances were kept under 500 kΩ. EEGs were recorded using a Quick-20 headset (Cognionics Inc.), amplified by a gain of 3, and digitized at a sampling rate of 500 Hz.
2.3 Preliminary Results and Discussion
In this manuscript, we present preliminary results from the EEG analysis only. EEG signals, after artifact removal and 1–50 Hz bandpass filtering, were analyzed using the fast Fourier transform and segmented into frequency bands (alpha: 8–14 Hz, beta: 14–30 Hz, gamma: 30–50 Hz). We calculated the mean amplitude of each frequency band at each electrode during the learning process on each page of the materials. A topographic map of the averaged EEG scalp distribution is shown in Fig. 3. This map shows an individual response while reading one difficult page (difficulty = 6.14, the page rated most difficult by that participant).
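As a sketch of this processing chain, the following Python fragment (using NumPy/SciPy, substituted here for the actual analysis environment) bandpass-filters one channel segment and computes the mean spectral amplitude in each band; the parameter values follow the description above, while the filter order is our assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate (Hz), as recorded by the Quick-20
BANDS = {"alpha": (8, 14), "beta": (14, 30), "gamma": (30, 50)}

def band_amplitudes(eeg, fs=FS):
    """Mean FFT amplitude per frequency band for one channel/page segment."""
    # 1-50 Hz bandpass (4th-order Butterworth, zero-phase)
    b, a = butter(4, [1, 50], btype="band", fs=fs)
    filtered = filtfilt(b, a, eeg)
    # amplitude spectrum of the segment
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
    amp = np.abs(np.fft.rfft(filtered)) / len(filtered)
    return {name: amp[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# one value per electrode and page, then plotted as a scalp topography
print(band_amplitudes(np.random.randn(FS * 10)))
```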
The results indicate an increase of alpha amplitudes at parietal sites when the learner experienced difficulty in the learning process. It has been suggested that parietal alpha activity is involved in mental fatigue, drowsiness, and low vigilance levels [9]. We can therefore infer that the learner was tired and could not maintain concentration on learning when he experienced difficulty. In this case, the page should be improved to enhance the motivation of beginners, for example by adding more attractive figures.
2.4 Future Study
We developed the system to measure multimodal data and detected specific EEG responses correlated with feelings of difficulty during learning. The next step is to extract the points where learners paid attention from the eye gaze data, and to integrate the EEG signals with the difficulty evaluations. This multimodal measurement can provide direct and objective information about learners' mental states and cognitive load levels.
3 User Interface Design for Elderly Learners
3.1 Purposes and Problems
In developing the learning support system for all ages, we focused on elderly learners as an initial step. We started by extending the existing e-learning system, which was designed for university students, to provide learning content and functions for elderly learners. Our first goal is to redesign the user interface (UI) of the new system for elderly users who have only basic skills in operating personal computers and/or smartphones.
Conventional e-learning systems are usually too complicated for elderly users. In a lecture for older adults using the existing e-learning platform of Kyushu University, we found that participants often had problems operating the web-based UI, even though most of them had experience with digital devices such as PCs or smartphones. The limitations and preferences of elderly users must therefore be considered in the UI design, which should reduce both the physical and mental loads on the users. Otherwise, tiredness, confusion, or frustration caused by the interface will hinder them.
Physical Limitations.
Physical aging can prevent elderly users from having a smooth operation experience. For example, declining vision makes it difficult to read long text in a small font size or to identify symbols against a background with low color contrast [10], and dry fingertips can cause occasional loss of response on touch screens.
Mental Limitations.
With the decline of cognitive capacities, elderly users prefer simple and intuitive ways of presenting information [11]. They are easily frustrated when facing unexpected situations or inconsistent information displays [12].
Preference.
The interface preferences of elderly users are connected with their previous experience and lifestyles. UI design metaphors widely applied in information systems can still be unfamiliar to them [13].
3.2 Dual-Tablet Interface Design
We propose a dual-tablet interface, which has a main screen for displaying and operating the main content (usually a page of the slides), and a secondary screen for supplementary information (e.g., page previews and progress) and operations (e.g., text input), as shown in Fig. 4. The interface is designed to reduce the layers of operations and to fix the main visual representation (i.e., the slide page), in order to avoid frequent view changes and annoying overlaps on the main content.
We first implemented the basic functions for reading learning materials, which include displaying slide pages, page control, bookmarks, marker input, and memo input and editing. We modified the visual design of the web-based interface of the existing e-learning system in the following aspects:
- Move the buttons out of the page view, fix the buttons' positions, and remove the auto-hide effect;
- Use one button for only one position, and remove the second-level menus of the buttons;
- Enlarge the buttons and always show the text labels of their functions;
- Use distinct color changes on the buttons when they are pushed.
The operations were changed from mouse and keyboard input to finger/touch-pen input to suit the touch-screen interface. However, we disabled most gesture operations on the main content to avoid misoperations that can frustrate the users. We also adopted handwriting as the main text input method.
The dual-tablet interface requires special design to make the correspondence of UI components between the two devices clear to users. For example, the original web-based interface uses the same icon for different memos on a page. In the new interface, we use a color sequence to show clearly which memo on the main screen is selected and being edited on the secondary screen.
3.3 Prototype Development
To synchronize the data for the operations of the dual-tablet interface, we implemented a server that transmits the data between the two devices through WebSocket [14]. Latency can be a weak point of this solution compared with direct connections between the two devices, such as Wi-Fi or Bluetooth. However, this solution can be applied to client devices on different hardware/software platforms, so it can easily be extended to, for example, PC-tablet or tablet-smartphone interfaces. The frontend was developed with the Ionic Framework (version 3.9.2), also to achieve cross-platform expandability. For the experiments, we deployed the server on Amazon Web Services (AWS) and the clients on HUAWEI MediaPad M3 Lite 10 tablets running Android 7.0. The structure of the prototype is shown in Fig. 5.
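The server essentially acts as a relay: any operation performed on one tablet is forwarded to its paired device. The following is a minimal sketch of such a relay in Python using the `websockets` package; the actual server implementation and message format are not described here, so the room/field names are our assumptions (the real frontend is Ionic-based).

```python
import asyncio
import json
import websockets  # pip install websockets (>= 11)

rooms = {}  # pairing id -> set of connected sockets (one per tablet)

async def relay(ws):
    # first message announces which dual-tablet pair this device belongs to
    room = json.loads(await ws.recv())["room"]
    peers = rooms.setdefault(room, set())
    peers.add(ws)
    try:
        async for message in ws:
            # forward every operation (page turn, memo edit, ...) to the paired device(s)
            await asyncio.gather(*(p.send(message) for p in peers if p is not ws))
    finally:
        peers.discard(ws)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

asyncio.run(main())
```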
3.4 Preliminary Experiment and Discussion
To evaluate the new touch-screen UI with target users, a preliminary experiment was carried out in October 2018. Eight participants (2 females and 6 males, aged from 63 to 71) took part. Seven of them had experience using smartphones, and six had used tablets before. The participants were divided equally into two groups. After a brief introduction to the prototype's functions, each participant was asked to read a learning material (a 15-page slide set about data visualization) with the prototype for around 20 min, and to try the prototype's marker and memo functions. During the experiment, videos of each participant's hand movements and operations on the tablets were recorded, and the event logs of the participants' operations were recorded on the server. After the experiment, the participants were asked about their understanding of the interface and their opinions on the usability of the prototype, and open discussions were conducted for further feedback. The participants followed the same sequence of tasks.
Since the prototype is still under development, we expected that users might have difficulties operating it, and the questionnaire results indeed show that the prototype's usability is not yet good enough. Nevertheless, the participants responded positively that the meanings of UI components such as the buttons and their functions were easy to understand. The main problems were the operations, which could confuse or frustrate them. The recorded videos were analyzed together with the operation logs to find the cases and situations in which the participants misoperated or got confused (as shown in Fig. 6). The preliminary findings from the experiment are as follows.
1. Counter-intuitive operations were confusing for the participants
In the original browser interface, the operation to add a marker to the page is to drag a rectangle with the mouse cursor. When the same operation was applied in the touch-screen interface, the participants tended to draw a straight line with the touch pen, expecting a colored bar to appear; the result, however, was a thin line that could hardly be seen, which was frustrating. Even after we explained how to draw a rectangle, some participants still repeated the incorrect operation.
2. Some gestures were difficult for elderly users, while others were acceptable
We used a long-press gesture for operations such as adding a new memo icon to the page or deleting the pressed marker or memo. This design was intended to avoid misoperations such as deleting something with an accidental tap. However, most participants were confused about, for example, how long they should keep pressing, whether they should press and move, and whether they should long-press the button or the icon. On the other hand, some participants tried to zoom in on the page with a pinch gesture when they felt some text was too small to read, declaring that they often do so on their smartphones.
3. Handwritten text input is not always appropriate
Although we expected handwritten text input to be easily accepted by most users, some participants complained that they wanted to use the soft keyboards they usually use on smartphones. At the same time, other participants expected more free-form handwriting or drawing, meaning they would not need to convert their strokes to digital text.
In general, we should not simply give users what we think is good for them. The characteristics of “elderly users” can vary and change over time.
3.5 Future Study
We are conducting iterative formative evaluations of the prototype to improve the inclusive interface design. More experiments with a more inclusive range of participants will be conducted, and the data collected in the experiments require further analysis. In future experiments and evaluations, we want to collect more objective data, such as more detailed logs and eye-tracking data, for more accurate evaluation. We are also going to integrate the new user interface with the experimental learning support system to provide learning materials and functions for older adults.
4 An Architecture for Supporting Learners of All Ages
Next, we propose a system architecture that couples learning analytics “in the wild” and learning analytics in controlled lab environments, and discuss how our multimodal learning analytics and user interface prototype can fit in this architecture to realize a novel learning support environment for all ages.
As shown in Fig. 7, the existing learning analytics architecture in Fig. 1 can be extended to support learners of all ages. Clearly, the user-friendly UI for all ages is a critical building block in this architecture: it provides easy access to and intuitive interactions with the course management, e-portfolio, and learning material management mechanisms. Their exhaust data flow into the learning database, thereby enabling learning analytics to provide learners and instructors with actionable information based on machine learning-based models. In this process, we can exploit data from auxiliary sensors such as embedded accelerometers, magnetometers, gyro sensors, and cameras, as well as inexpensive eye trackers and Wi-Fi/Bluetooth devices. The data from auxiliary sensors can be used to improve the accuracy and granularity of the information that is fed back to learners and instructors through the learning analytics tools. The knowledge base for supporting learners allows for the accumulation and retrieval of information relevant to improving the content, layout, and structure of learning materials, as well as to supporting learners' motivation and social contexts.
To provide useful and actionable feedback to learners and instructors, we analyze learners at a finer level in the LA Lab, a controlled sensor-armed lab environment for conducting in-depth learning analytics so as to update the machine learning-based models and the knowledge base of the LA “in the wild.”
In the LA Lab, multimodal sensors including EEG and eye trackers collect rich and detailed signals from learners who agreed to participate in experiential learning sessions. In addition to the learning data from the learning support system they use, the LA Lab system generates fine-grained learning data that enable in-depth learning analytics tools for experts, including learning scientists, instructional designers, and cognitive scientists. These experts can also record structured and unstructured information in the knowledge base. In addition, they can suggest desirable data to collect via the tools and the sensors so as to improve the usefulness of the overall system. Juxtaposed to Experts in Fig. 7 is the machine learning component (Machine Learning) that generates and updates the machine learning-based models (ML-based Models) for predicting relevant items in the knowledge base by using the learning data only.
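As a sketch of what such an ML-based model could look like, the following hypothetical Python fragment trains a classifier mapping features derived from the learning data to knowledge-base items. The feature names, labels, and toy values are entirely our assumptions; the actual models are still under design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# features per (learner, page): e.g., dwell time (s), re-reads, self-rated difficulty
X = np.array([[42.0, 3, 7], [8.0, 1, 2], [55.0, 4, 8], [12.0, 1, 3]])
# labels: knowledge-base items suggesting an improvement, curated in LA Lab sessions
y = ["add_figure", "no_change", "add_figure", "no_change"]

model = LogisticRegression().fit(X, y)
print(model.predict([[47.0, 2, 6]]))  # suggested item for an unseen page
```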
5 Conclusion
We have presented our approach and first steps towards building technology-enhanced learning environments for adults of all ages. Our approach exploits learning analytics in hybrid learning environments and combines the lab-based in-depth learning analytics tools and the mechanisms for providing relevant and actionable feedback “in the wild.” We have developed a system to measure eye-tracking and EEG signals to support the process of narrowing down what hampers learning, based on multimodal signals. We have also developed a prototype of a dual-tablet user interface that enhances the physical affordance of information and supports intuitive operations.
At this moment, our research efforts focus on the kinds of practical learning that can lead to increased opportunities for social participation. They include acquisition of the skills to use digital technologies and/or data. We are also interested in supporting people to learn caregiving skills. Learning such skills would require acquisition of tacit knowledge and embodied skills, and thus supporting it would create exciting sets of challenges to tackle in the future. Moreover, automating feedback to learners and instructors as much as possible would improve the scalability and deployability of the proposed environment.
References
Konomi, S., et al.: Towards supporting multigenerational co-creation and social activities: extending learning analytics platforms and beyond. In: Streitz, N., Konomi, S. (eds.) DAPI 2018. LNCS, vol. 10922, pp. 82–91. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91131-1_6
Jarodzka, H., Holmqvist, K., Gruber, H.: Eye tracking in educational science: theoretical frameworks and research agendas. J. Eye Mov. Res. 10(1) (2017)
Jian, Y.-C., Ko, H.-W.: Influences of text difficulty and reading ability on learning illustrated science texts for children: an eye movement study. Comput. Educ. 113, 263–279 (2017)
Hu, Y., Wu, B., Gu, X.: An eye tracking study of high- and low-performing students in solving interactive and analytical problems. Educ. Technol. Soc. 20, 300–311 (2017)
Gegenfurtner, A., Lehtinen, E., Säljö, R.: Expertise differences in the comprehension of visualizations: a meta-analysis of eye-tracking research in professional domains (2011). https://doi.org/10.1007/s10648-011-9174-7
Alemdag, E., Cagiltay, K.: A systematic review of eye tracking research on multimedia learning. Comput. Educ. 125, 413–428 (2018)
Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., Broussard, C.: What’s new in psychtoolbox-3. Perception 36, 1–16 (2007)
Tobii Pro SDK. http://developer.tobiipro.com. Accessed 30 Nov 2018
Borghini, G., Astolfi, L., Vecchiato, G., Mattia, D., Babiloni, F.: Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness. Neurosci. Biobehav. Rev. 44, 58–75 (2014)
Morris, J.M.: User-Interface design for older adults. Interact. Comput. 6, 373–393 (1994)
Al-Razgan, M.S., Al-Khalifa, H.S., Al-Shahrani, M.D., AlAjmi, H.H.: Touch-based mobile phone interface guidelines and design recommendations for elderly people: a survey of the literature. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds.) ICONIP 2012. LNCS, vol. 7666, pp. 568–574. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34478-7_69
Hawthorn, D.: Possible implications of aging for interface designers. Interact. Comput. 12, 507–528 (2000)
Leung, R., McGrenere, J., Graf, P.: Age-related differences in the initial usability of mobile device icons. Behav. Inf. Technol. 30, 629–642 (2011)
Pimentel, V., Nickerson, B.G.: Communicating and displaying real-time data with websocket. IEEE Internet Comput. 16, 45–53 (2012)
Acknowledgement
This work was supported by JST Mirai Grant Number 17-171024547, Japan.