DOI: 10.1145/3546000.3546026
HP3C Conference Proceedings · research-article

Searching Top-K Similar Moving Videos

Published: 19 August 2022

ABSTRACT

Sensors enable mobile devices to generate large amounts of content-aware data, such as trajectory, gyroscope, and video data. The moving video is an emerging type of moving object and a potential data source for geo-referenced applications. Measuring the similarity of moving videos is widely used in traffic management, tourist recommendation, and location-based advertising. Our prior work proposed two similarity measures: the Largest Common View Subsequences, which computes similar moving videos accurately, and the View Vector Subsequences, which computes them quickly. In this paper, we propose an algorithm for searching the top-k similar moving videos (K-SSMV). First, we define the problem of searching the top-k similar moving videos. Then, we describe the search strategy, which ranks video pairs from most to least similar. Because the number of similar videos may be less than k, we first compute similarities with the Largest Common View Subsequences algorithm; if fewer than k similar videos are found, we additionally compute similarities with the View Vector Subsequences algorithm. Next, we mix the candidate videos produced by the two algorithms, and the K-SSMV algorithm picks the top-k similar videos from the candidates. Finally, we evaluate the accuracy and computational cost of the proposed algorithms. The experiments verify that our algorithms can efficiently search the top-k similar moving videos.
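The two-pass strategy described above (accurate pass first, fast fallback if fewer than k candidates, then mix and rank) can be sketched as follows. This is a minimal illustration, not the authors' published method: `lcvs_similarity` and `vvs_similarity` are simple stand-ins for the Largest Common View Subsequences and View Vector Subsequences measures, and the `threshold` parameter is a hypothetical cut-off for calling two videos "similar".

```python
def lcvs_similarity(a, b):
    # Stand-in for the Largest Common View Subsequences measure:
    # longest common contiguous run of view IDs, normalised by the
    # longer sequence length (illustrative only).
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            run = 0
            while i + run < len(a) and j + run < len(b) and a[i + run] == b[j + run]:
                run += 1
            best = max(best, run)
    return best / max(len(a), len(b), 1)


def vvs_similarity(a, b):
    # Stand-in for the faster View Vector Subsequences measure:
    # cheap Jaccard overlap over view IDs (illustrative only).
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def k_ssmv(query, videos, k, threshold=0.5):
    """Return the top-k (video_id, score) pairs most similar to `query`.

    `videos` maps a video ID to its sequence of view IDs.
    """
    # Pass 1: accurate LCVS scores, kept only above the threshold.
    scores = {vid: lcvs_similarity(query, seq) for vid, seq in videos.items()}
    similar = {vid: s for vid, s in scores.items() if s >= threshold}

    # Pass 2: if fewer than k similar videos were found, score the
    # remaining videos with the faster VVS measure.
    if len(similar) < k:
        for vid, seq in videos.items():
            if vid not in similar:
                similar[vid] = vvs_similarity(query, seq)

    # Mix both candidate sets and pick the top-k by score.
    ranked = sorted(similar.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]
```

For example, with a query view sequence `[1, 2, 3, 4]` and k = 3, a video sharing the whole sequence outranks one sharing a three-view run, and videos below the LCVS threshold are still rankable through the VVS fallback.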


    • Published in

      HP3C '22: Proceedings of the 6th International Conference on High Performance Compilation, Computing and Communications
      June 2022
      221 pages
ISBN: 9781450396295
DOI: 10.1145/3546000

      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article
      • Research
      • Refereed limited
