
Multimodal learning analytics: assessing learners' mental state during the process of learning

Published: 01 October 2018


    Published In

    The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 2
    ACM Books, October 2018, 2034 pages
    ISBN: 9781970001716
    DOI: 10.1145/3107990

    Publisher

    Association for Computing Machinery and Morgan & Claypool


    Qualifiers

    • Chapter

    Appears in

    ACM Books

    Cited By

    • (2025) Qualitative Parameter Triangulation: A Conceptual and Methodological Framework for Event-Based Temporal Models. Proceedings of the 15th International Learning Analytics and Knowledge Conference, pp. 537-546. DOI: 10.1145/3706468.3706538. Online publication date: 3-Mar-2025.
    • (2023) Multimodal Predictive Student Modeling with Multi-Task Transfer Learning. LAK23: 13th International Learning Analytics and Knowledge Conference, pp. 333-344. DOI: 10.1145/3576050.3576101. Online publication date: 13-Mar-2023.
    • (2023) Multisensory Interaction and Analytics to Enhance Smart Learning Environments: A Systematic Literature Review. IEEE Transactions on Learning Technologies, 16(3):414-430. DOI: 10.1109/TLT.2023.3243210. Online publication date: 1-Jun-2023.
    • (2023) Understanding Flow Experience in Video Learning by Multimodal Data. International Journal of Human-Computer Interaction, 40(12):3144-3158. DOI: 10.1080/10447318.2023.2181878. Online publication date: 23-Feb-2023.
    • (2022) Automatic Detection of Reflective Thinking in Mathematical Problem Solving Based on Unconstrained Bodily Exploration. IEEE Transactions on Affective Computing, 13(2):944-957. DOI: 10.1109/TAFFC.2020.2978069. Online publication date: 1-Apr-2022.
    • (2022) CDM4MMLA: Contextualized Data Model for MultiModal Learning Analytics. The Multimodal Learning Analytics Handbook, pp. 205-229. DOI: 10.1007/978-3-031-08076-0_9. Online publication date: 31-May-2022.
    • (2021) I Know What You Know: What Hand Movements Reveal about Domain Expertise. ACM Transactions on Interactive Intelligent Systems, 11(1):1-26. DOI: 10.1145/3423049. Online publication date: 15-Mar-2021.
    • (2021) Exploring students' cognitive and affective states during problem solving through multimodal data: Lessons learned from a programming activity. Journal of Computer Assisted Learning, 38(1):40-59. DOI: 10.1111/jcal.12590. Online publication date: 6-Sep-2021.
    • (2021) Challenges and opportunities of multimodal data in human learning: The computer science students' perspective. Journal of Computer Assisted Learning, 37(4):1030-1047. DOI: 10.1111/jcal.12542. Online publication date: 2-Mar-2021.
    • (2021) Methodological Considerations for Understanding Students' Problem Solving Processes and Affective Trajectories During Game-Based Learning: A Data Fusion Approach. HCI in Games: Serious and Immersive Games, pp. 201-215. DOI: 10.1007/978-3-030-77414-1_15. Online publication date: 3-Jul-2021.