Prediction of recall accuracy in contextual understanding tasks using features of oculo-motors

  • Long paper
  • Published in: Universal Access in the Information Society 13, 175–190 (2014)

Abstract

Contextual understanding, which consists of memory, reasoning, and recall, is a key process in human–computer interaction and in interfaces that provide universal accessibility. To determine whether the recall accuracy of reading and memorizing tasks can be predicted from oculo-motor features (eye movements and pupillary changes), a contextual understanding experiment composed of definition and question statements was conducted. It was hypothesized that oculo-motor features recorded during memorization are related to recall performance. Viewers' oculo-motor features were observed while they read and judged statements, and significant differences were identified in most features according to the correctness of responses. The verification accuracy of question statements could be predicted from the features observed for definition statements using support vector regression, and the predicted accuracy correlated significantly with the experimentally measured accuracy. In addition, the correctness of judgments of question statements was estimated from the features using support vector machines; estimation accuracy was highest when all extracted eye-movement features were used. The number of definition statements, an experimental condition used to control task difficulty, affected the accuracy of both prediction and estimation. These results provide evidence that oculo-motor features recorded during the reading, memorizing, and recalling of statements can serve as an index of contextual understanding.
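
The modeling pipeline summarized above can be illustrated in code. The following is a minimal sketch, not the authors' implementation: it assumes scikit-learn (whose SVR and SVC classes wrap LIBSVM) and uses synthetic placeholder data; the feature set, sample sizes, and targets are hypothetical stand-ins for the paper's extracted oculo-motor features.

```python
# Minimal sketch (not the authors' code) of the two models described in the
# abstract: support vector regression to predict recall accuracy, and a
# support vector machine to estimate response correctness, both driven by
# oculo-motor features. Data and feature names are hypothetical placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Hypothetical per-statement features, e.g. fixation count, mean fixation
# duration, saccade length, and mean pupil diameter (stand-ins only).
X = rng.normal(size=(200, 4))
recall_accuracy = rng.uniform(size=200)                 # continuous SVR target
is_correct = (rng.uniform(size=200) > 0.5).astype(int)  # binary SVC label

train, test = slice(0, 150), slice(150, 200)

# SVR: predict a continuous recall-accuracy score from the features.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
svr.fit(X[train], recall_accuracy[train])
predicted = svr.predict(X[test])
r = np.corrcoef(predicted, recall_accuracy[test])[0, 1]

# SVM: classify whether a question-statement judgment was correct.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X[train], is_correct[train])
estimation_accuracy = svm.score(X[test], is_correct[test])

# With random placeholder data both scores hover around chance; with real
# oculo-motor features the paper reports a significant correlation and
# above-chance estimation accuracy.
print(f"predicted-vs-actual correlation: {r:.2f}")
print(f"estimation accuracy: {estimation_accuracy:.2f}")
```

With real features, the regression target would be the measured recall accuracy and the classifier label the correctness of each judgment; the random placeholders keep the sketch self-contained but leave both scores near chance.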

Author information

Corresponding author

Correspondence to Minoru Nakayama.

About this article

Cite this article

Nakayama, M., Hayashi, Y. Prediction of recall accuracy in contextual understanding tasks using features of oculo-motors. Univ Access Inf Soc 13, 175–190 (2014). https://doi.org/10.1007/s10209-013-0307-2
