ABSTRACT
While Multimodal Learning Analytics (MMLA) is becoming a popular methodology in the LAK community, most educational researchers still rely on traditional instruments for capturing learning processes (e.g., click-streams, log data, self-reports, qualitative observations). MMLA has the potential to complement and enrich traditional measures of learning by providing high-frequency data on learners’ behavior, cognition, and affective states. However, there is currently no easy-to-use toolkit for recording multimodal data streams: existing methodologies rely on physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit, implemented as a website that provides easy access to state-of-the-art machine learning algorithms for collecting a variety of data streams from webcams: attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), hand gestures, emotions (from facial expressions and speech), and lower-level computer vision algorithms (e.g., fiducial and color tracking). The toolkit runs in any browser and requires neither specialized hardware nor programming experience. We compare it with traditional data collection methods and describe a case study in which the EZ-MMLA toolkit was used in a classroom context. We conclude by discussing other applications of the toolkit, its potential limitations, and future steps.
Multimodal Data Collection Made Easy: The EZ-MMLA Toolkit. A data collection website that provides educators and researchers with easy access to multimodal data streams.