Abstract
When watching an educational video, our eyes search for the information relevant to the topic being explained at that particular moment. Studying learners’ gaze behavior, and in particular how it correlates with their performance, we have found a series of results that converge on an understanding of learner behavior more abstract than any single use situation or studied learning context. In this contribution we present “Looking Through vs. Looking At” as a generative piece of intermediate-level knowledge, and show how it can be constructed as a Strong Concept (as developed by Höök and Löwgren [10]) in technology-enhanced learning (TEL). “Looking At”, simply put, refers to missing the relevant information, either by looking at the wrong place or by lagging behind the teacher in time. “Looking Through”, on the other hand, is the success of finding the relevant displayed information at the right moment, so that communication through the verbal and visual channels becomes synchronous. The visual medium becomes transparent, and the learning experience shifts from interacting with the material to interacting with the teacher. We formally define the proposed strong concept and show how to quantify it in dyadic interaction scenarios. The concept applies to MOOC video interaction, but also to other learning scenarios such as (collaborative) problem solving. We place particular emphasis on the generative aspect of the concept and demonstrate, with examples, how it can help design solutions for interactive learning situations.
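The paper's own formalization is not reproduced on this page, but the general idea of quantifying “Looking Through” in a dyad can be illustrated with a minimal sketch: score the fraction of moments at which the learner's gaze lands near the location the teacher is currently referring to, allowing a bounded temporal lag. All names and parameters below (`looking_through_score`, `radius`, `max_lag`) are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def looking_through_score(learner_gaze, target_locations, radius=80.0, max_lag=5):
    """Illustrative gaze/target cross-recurrence score (not the paper's metric).

    learner_gaze     : (T, 2) array of gaze positions (x, y), one row per time step
    target_locations : (T, 2) array of the location the teacher refers to at each step
    radius           : spatial tolerance (e.g. pixels) for counting a 'hit'
    max_lag          : how many time steps the learner may lag behind the teacher

    Returns the fraction of time steps at which the learner's gaze falls within
    `radius` of the referenced location at some lag in [0, max_lag]:
    high values suggest 'Looking Through', low values 'Looking At'.
    """
    T = len(target_locations)
    hits = 0
    for t in range(T):
        # Gaze samples from now up to max_lag steps into the future.
        window = learner_gaze[t:min(t + max_lag + 1, T)]
        dists = np.linalg.norm(window - target_locations[t], axis=1)
        if np.any(dists <= radius):
            hits += 1
    return hits / T

# Toy example: the learner follows the teacher's references with a 2-step lag,
# so the score should be close to 1.0 (i.e. 'Looking Through').
rng = np.random.default_rng(0)
targets = rng.uniform(0, 1000, size=(100, 2))
gaze = np.roll(targets, 2, axis=0) + rng.normal(0, 10, size=(100, 2))
print(looking_through_score(gaze, targets))
```

A learner who systematically looks at the wrong region, or whose lag exceeds `max_lag`, would score near zero under this sketch, which is the “Looking At” pattern described in the abstract.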
References
Alavi, H.S., Dillenbourg, P.: An ambient awareness tool for supporting supervised collaborative problem solving. IEEE Trans. Learn. Technol. 5(3), 264–274 (2012)
Allopenna, P.D., Magnuson, J.S., Tanenhaus, M.K.: Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. J. Mem. Lang. 38(4), 419–439 (1998)
Baheti, P., Williams, L., Gehringer, E., Stotts, D.: Exploring pair programming in distributed object-oriented team projects. In: Educator’s Workshop, OOPSLA, pp. 4–8. Citeseer (2002)
Cherubini, M., Dillenbourg, P.: The effects of explicit referencing in distance problem solving over shared maps. In: Proceedings of the 2007 International ACM Conference on Supporting Group Work, pp. 331–340. ACM (2007)
Duchowski, A.T., Cournia, N., Cumming, B., McCallum, D., Gramopadhye, A., Greenstein, J., Sadasivan, S., Tyrrell, R.A.: Visual deictic reference in a collaborative virtual environment. In: Proceedings of the 2004 Symposium on Eye Tracking Research and Applications. ACM (2004)
Gergle, D., Clark, A.T.: See what I’m saying?: Using dyadic mobile eye tracking to study collaborative reference. In: Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, pp. 435–444. ACM (2011)
Gibson, J.J.: The Perception of the Visual World. Houghton Mifflin, Boston (1950)
Gregory, R.L.: Perceptions as hypotheses. Philos. Trans. Royal Soc. B: Biol. Sci. 290(1038), 181–197 (1980)
Griffin, Z.M., Bock, K.: What the eyes say about speaking. Psychol. Sci. 11(4), 274–279 (2000)
Höök, K., Löwgren, J.: Strong concepts: Intermediate-level knowledge in interaction design research. ACM Trans. Comput. Hum. Interact. 19(3), 23 (2012)
Jacob, R., Karn, K.S.: Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. Mind 2(3), 4 (2003)
Jermann, P., Nüssli, M.-A.: Effects of sharing text selections on gaze cross-recurrence and interaction quality in a pair programming task. In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp. 1125–1134. ACM (2012)
Li, N., Kidzinski, L., Jermann, P., Dillenbourg, P.: How do in-video interactions reflect perceived video difficulty? In: The Third European MOOC Stakeholders Summit (EMOOCs 2015)
Nüssli, M.-A.: Dual eye-tracking methods for the study of remote collaborative problem solving. Ph.D. thesis, École Polytechnique Fédérale de Lausanne (2011)
Prieto, L.P., Alavi, H., Verma, H.: Strong technology-enhanced learning concepts. In: 12th European Conference on Technology Enhanced Learning (EC-TEL 2017)
Raca, M., Kidzinski, L., Dillenbourg, P.: Translating head motion into attention: towards processing of students’ body language. In: Proceedings of the 8th International Conference on Educational Data Mining (2015)
Richardson, D.C., Dale, R., Kirkham, N.Z.: The art of conversation is coordination: common ground and the coupling of eye movements during dialogue. Psychol. Sci. 18(5), 407–413 (2007)
Sharma, K.: Gaze analysis methods for learning analytics, Chapter 2. Ph.D. thesis, École Polytechnique Fédérale de Lausanne (2015)
Sharma, K., Jermann, P., Nüssli, M.-A., Dillenbourg, P.: Gaze evidence for different activities in program understanding. In: 24th Annual Conference of Psychology of Programming Interest Group (2012)
Williams, L.A., Kessler, R.R.: All I really need to know about pair programming I learned in kindergarten. Commun. ACM 43(5), 108–114 (2000)
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Sharma, K., Alavi, H.S., Jermann, P., Dillenbourg, P. (2017). Looking THROUGH versus Looking AT: A Strong Concept in Technology Enhanced Learning. In: Lavoué, É., Drachsler, H., Verbert, K., Broisin, J., Pérez-Sanagustín, M. (eds) Data Driven Approaches in Digital Education. EC-TEL 2017. Lecture Notes in Computer Science(), vol 10474. Springer, Cham. https://doi.org/10.1007/978-3-319-66610-5_18
Print ISBN: 978-3-319-66609-9
Online ISBN: 978-3-319-66610-5