DOI: 10.1145/2502081.2502137
poster

How do we deep-link?: leveraging user-contributed time-links for non-linear video access

Published: 21 October 2013

ABSTRACT

This paper studies a new way of accessing videos in a non-linear fashion. Existing non-linear access methods allow users to jump into videos at points that depict specific visual concepts or that are likely to elicit affective reactions. We believe that deep-link comments, which occur unprompted on social video sharing platforms, offer a new opportunity beyond existing methods. With deep-link comments, viewers express themselves about a particular moment in a video by including a time-code. Deep-link comments are special because they reflect viewer perceptions of noteworthiness, which include, but extend beyond, depicted conceptual content and induced affective reactions. Based on deep-link comments collected from YouTube, we develop a Viewer Expressive Reaction Variety (VERV) taxonomy that captures how viewers deep-link. We validate the taxonomy with a user study on a crowdsourcing platform and discuss how it extends conventional relevance criteria. We carry out experiments that show that deep-link comments can be automatically filtered and sorted into VERV categories.
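The abstract does not spell out how deep-link comments are recognized in the first place. As a minimal illustrative sketch (our own assumption, not the authors' published pipeline; the name extract_deep_links is hypothetical), time-codes of the form m:ss or h:mm:ss can be pulled from comment text with a regular expression and converted to playback offsets in seconds:

    import re

    # Time-codes such as "1:23" or "1:02:45" that viewers embed in comments
    # to deep-link into a moment in a video. The exact formats accepted by
    # the paper's pipeline are an assumption here.
    TIMECODE = re.compile(r"\b(?:(\d{1,2}):)?(\d{1,2}):(\d{2})\b")

    def extract_deep_links(comment):
        """Return playback offsets (in seconds) for all time-codes in a comment."""
        offsets = []
        for hours, minutes, seconds in TIMECODE.findall(comment):
            total = int(minutes) * 60 + int(seconds)
            if hours:  # the hours group is optional and may be empty
                total += int(hours) * 3600
            offsets.append(total)
        return offsets

    # Example: a typical unprompted deep-link comment.
    print(extract_deep_links("The reaction at 1:07 is priceless, see also 1:02:45"))
    # -> [67, 3765]

A filtering and sorting stage of the kind the experiments describe could then discard comments with no extracted offset and pass the remainder to a text classifier that assigns VERV categories.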

Published in

MM '13: Proceedings of the 21st ACM international conference on Multimedia
October 2013, 1166 pages
ISBN: 9781450324045
DOI: 10.1145/2502081

Copyright © 2013 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

MM '13 paper acceptance rate: 47 of 235 submissions (20%). Overall MM acceptance rate: 995 of 4,171 submissions (24%).
