DOI: 10.1145/3123024.3129274 · ACM UbiComp Conference Proceedings · Extended abstract

Quantified reading and learning for sharing experiences

Published: 11 September 2017

ABSTRACT

This paper presents two topics. The first is an overview of our recently started project, called "experiential supplement," which aims to transfer human experiences by recording and processing them into a form that others can accept. The second is sensing technologies for producing experiential supplements in the context of learning. Because reading is a basic activity of learning, we also deal with the sensing of reading. We present methods, with experimental results, for quantifying reading in terms of the number of words read, the period of reading, and the type of document read, as well as for identifying the words that were read. For learning, we propose methods for estimating English ability, confidence in answers to English questions, and unknown words. These are sensed with various sensors, including eye trackers, EOG, EEG, and first-person vision.
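The word-count estimation mentioned above can be illustrated with a minimal sketch. This is not the authors' algorithm; it only assumes the general idea behind saccade-based reading quantification: during left-to-right reading, the eyes make small forward saccades (roughly one per word or two) and large leftward return sweeps at line breaks, so counting forward jumps in a horizontal gaze trace gives a rough word estimate. All thresholds and the words-per-saccade factor below are illustrative values, not ones from the paper.

```python
def estimate_words_read(gaze_x, forward_thresh=15.0, sweep_thresh=-100.0,
                        words_per_saccade=1.3):
    """Roughly estimate words read from a horizontal gaze trace.

    gaze_x: sequence of horizontal gaze coordinates (e.g. pixels), one per sample.
    forward_thresh: minimum rightward jump counted as a forward (reading) saccade.
    sweep_thresh: large leftward jump treated as a return sweep to the next line.
    words_per_saccade: assumed average number of words covered per forward saccade.

    Returns (estimated_word_count, line_break_count).
    """
    forward_saccades = 0
    line_breaks = 0
    for prev, cur in zip(gaze_x, gaze_x[1:]):
        dx = cur - prev
        if dx >= forward_thresh:        # small rightward jump: reading saccade
            forward_saccades += 1
        elif dx <= sweep_thresh:        # big leftward jump: return sweep
            line_breaks += 1
    return round(forward_saccades * words_per_saccade), line_breaks


# Toy trace: three forward saccades, one return sweep, three more saccades.
words, lines = estimate_words_read([100, 130, 160, 190, 40, 70, 100, 130])
```

A real system would first filter noisy samples and segment fixations before counting saccades; the sketch only conveys the counting step.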

