
Retrieving System of Presentation Contents Based on User’s Operations and Semantic Contexts

  • Conference paper
Database Systems for Advanced Applications (DASFAA 2010)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5982)


Abstract

A growing number of presentation contents, consisting of heterogeneous media such as videos and slides, are being recorded and viewed, and presentation content archives are widely used in e-Learning. However, when a slide contains a keyword that a user does not know, it becomes difficult to understand the rest of the presentation, forcing the user to interrupt viewing and look up the keyword. In this paper, we propose an interval-retrieving method based on the user's viewing operations. The method extracts the user's interval-retrieving intention from his or her viewing operations and selected keywords, and generates queries for the intervals from that intention and the role of the keyword within them. As a result, users can obtain intervals that help them understand the presentation contents more efficiently without interrupting their viewing.
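The abstract describes the pipeline only at a high level (viewing operations + selected keyword → retrieving intention → interval query). The following Python sketch is purely illustrative and is not taken from the paper: the operation types, the intention heuristic, and the ranking rule are all assumptions chosen to make the flow concrete.

```python
# Illustrative sketch only: operation kinds ("pause", "rewind"), the two
# intentions ("explain", "overview"), and the ranking rule are assumed,
# not the authors' actual method.
from dataclasses import dataclass
from typing import List


@dataclass
class ViewingOperation:
    kind: str         # hypothetical operation type, e.g. "pause", "rewind", "slide_jump"
    timestamp: float  # seconds from the start of the presentation


@dataclass
class Interval:
    start: float
    end: float
    keywords: List[str]  # keywords appearing in this interval's slides/speech


def infer_intention(ops: List[ViewingOperation]) -> str:
    """Guess the user's retrieving intention from recent operations (assumed heuristic)."""
    kinds = {op.kind for op in ops}
    if "rewind" in kinds or "pause" in kinds:
        return "explain"    # user seems stuck on the keyword -> wants an explanatory interval
    return "overview"       # otherwise -> wants a broader, summarizing interval


def retrieve_intervals(keyword: str,
                       ops: List[ViewingOperation],
                       archive: List[Interval]) -> List[Interval]:
    """Return archive intervals matching the keyword, ordered by the inferred intention."""
    intention = infer_intention(ops)
    hits = [iv for iv in archive if keyword in iv.keywords]
    # A real system would weight the keyword's role within each interval;
    # here we simply prefer short intervals for "explain" and long ones for "overview".
    return sorted(hits, key=lambda iv: iv.end - iv.start,
                  reverse=(intention == "overview"))
```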




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kitayama, D., Sumiya, K. (2010). Retrieving System of Presentation Contents Based on User’s Operations and Semantic Contexts. In: Kitagawa, H., Ishikawa, Y., Li, Q., Watanabe, C. (eds) Database Systems for Advanced Applications. DASFAA 2010. Lecture Notes in Computer Science, vol 5982. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12098-5_50


  • DOI: https://doi.org/10.1007/978-3-642-12098-5_50

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-12097-8

  • Online ISBN: 978-3-642-12098-5

  • eBook Packages: Computer Science, Computer Science (R0)
