ABSTRACT
This paper explores the effects of adding audio description to an educational film on children's learning, as measured by a visual recognition task. We hypothesize that a multimodal educational setting, combining verbal (film dialogue and audio description) and non-verbal (moving images) representations of knowledge, fosters knowledge acquisition: providing information via multiple channels strengthens memory retrieval. The study employs eye-tracking methodology to examine the recognition of previously seen film material, testing whether audio description promotes recognition-based rather than elimination-based decision-making in the visual recognition task. An analysis of first fixation duration and first run fixation count in the experimental and control groups partially confirmed our hypotheses: children in the experimental group generally looked longer at scenes they had seen, supporting the hypothesis that their decisions were based on recognition, whereas children in the control group fixated longer on scenes they were unfamiliar with, suggesting decisions based on elimination.
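The two eye-tracking measures named above can be derived from a chronological fixation sequence roughly as sketched below. This is an illustrative sketch only, not the study's actual analysis pipeline; the fixation data, AOI labels, and the helper function `first_run_measures` are hypothetical.

```python
def first_run_measures(fixations, aoi):
    """Return (first fixation duration in ms, number of fixations in the
    first run) for the given area of interest (AOI).

    `fixations` is a chronological list of (aoi_label, duration_ms) tuples.
    A "run" is a maximal sequence of consecutive fixations inside the AOI.
    """
    first_fix_duration = None
    run_count = 0
    in_first_run = False
    for label, duration in fixations:
        if label == aoi:
            if first_fix_duration is None:
                first_fix_duration = duration  # first fixation on the AOI
                in_first_run = True
            if in_first_run:
                run_count += 1
        elif in_first_run:
            break  # the first run ends when gaze leaves the AOI
    return first_fix_duration, run_count

# Hypothetical trial: gaze lands on the previously seen scene, dwells there
# briefly, leaves, and later returns (the return is not part of the first run).
trial = [("distractor", 180), ("seen", 240), ("seen", 210),
         ("distractor", 150), ("seen", 300)]
print(first_run_measures(trial, "seen"))  # -> (240, 2)
```

Longer first-run values on previously seen scenes would be read, as in the abstract, as evidence of recognition-based rather than elimination-based decisions.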
Index Terms
- Multimodal learning with audio description: an eye tracking study of children's gaze during a visual recognition task