DOI: 10.1145/2338676.2338694
Research article

Multimodal learning with audio description: an eye tracking study of children's gaze during a visual recognition task

Published: 03 August 2012

ABSTRACT

The paper explores the effects of adding audio description to an educational film on children's learning behaviour, as manifested in a visual recognition task. We hypothesize that the multimodal educational setting, consisting of both verbal (film dialogue and audio description) and non-verbal (motion pictures) representations of knowledge, fosters knowledge acquisition because it provides information via multiple channels, which in turn strengthens memory retrieval. In the study we employ eye-tracking methodology to examine the recognition of previously seen film material, testing whether audio description promotes recognition-based rather than elimination-based decision-making in the visual recognition task. The analysis of first fixation duration and first-run fixation count in the experimental and control groups partially confirmed our hypotheses. Children in the experimental group generally looked longer at the scenes they had seen, which supports the hypothesis that their decisions were based on recognition, whereas children in the control group fixated longer on scenes they were unfamiliar with, suggesting decisions based on elimination.
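The two gaze measures named in the abstract, first fixation duration and first-run fixation count, can be made concrete with a short sketch. The Python snippet below is not the authors' analysis code (none is published with the abstract); it only illustrates, under an assumed fixation-record format with hypothetical field names, how the two measures are commonly derived for an area of interest (AOI) from a time-ordered sequence of fixations.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    aoi: Optional[str]   # label of the AOI the fixation falls in, or None
    duration_ms: float   # fixation duration in milliseconds

def first_fixation_duration(fixations: List[Fixation], aoi: str) -> Optional[float]:
    """Duration of the first fixation that lands in the given AOI."""
    for fix in fixations:
        if fix.aoi == aoi:
            return fix.duration_ms
    return None  # the AOI was never fixated in this trial

def first_run_fixation_count(fixations: List[Fixation], aoi: str) -> int:
    """Number of fixations in the first uninterrupted visit (run) to the AOI."""
    count = 0
    in_run = False
    for fix in fixations:
        if fix.aoi == aoi:
            count += 1
            in_run = True
        elif in_run:
            break  # the first run ends once gaze leaves the AOI
    return count

# Hypothetical trial: gaze visits the previously seen scene twice.
trial = [Fixation("distractor", 180), Fixation("seen_scene", 240),
         Fixation("seen_scene", 210), Fixation("distractor", 190),
         Fixation("seen_scene", 300)]
print(first_fixation_duration(trial, "seen_scene"))   # 240.0
print(first_run_fixation_count(trial, "seen_scene"))  # 2
```

In the study's terms, larger values of these measures on previously seen scenes would be read as evidence for recognition-based responding, whereas longer dwelling on unfamiliar scenes would point to decisions by elimination.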


Published in

SAP '12: Proceedings of the ACM Symposium on Applied Perception
August 2012, 131 pages
ISBN: 9781450314312
DOI: 10.1145/2338676
      Copyright © 2012 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

SAP '12 paper acceptance rate: 21 of 40 submissions (53%)
Overall acceptance rate: 43 of 94 submissions (46%)
