
What Is the Role of Similarity for Known-Item Search at Video Browser Showdown?

  • Conference paper

Similarity Search and Applications (SISAP 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11223)


Abstract

Across many domains, machine learning approaches have started to compete with human experts in tasks originally considered very difficult to automate. However, effective retrieval of general video shots remains an open issue due to their variability, their complexity, and the insufficiency of available training sets. In addition, users may struggle to formulate their search intents in a given query interface. Hence, many systems still rely on interactive human-machine cooperation to boost the effectiveness of the retrieval process. In this paper, we present our experience with known-item search tasks in the Video Browser Showdown competition, where the participating interactive video retrieval systems rely mostly on various similarity models. We discuss the observed difficulty of known-item search tasks, categorize the employed interaction components (which rely on similarity models), and inspect successful interactive known-item searches from the most recent iteration of the competition. Finally, we present open similarity search challenges for known-item search in video.
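To make the role of similarity models concrete, the following minimal sketch (in Python, assuming NumPy) shows the kind of ranking step that interactive video retrieval tools commonly build on: keyframe feature vectors are compared to a query feature vector by cosine similarity, and the closest keyframes are handed to the user for inspection. The function name, feature dimensionality, and data are illustrative assumptions, not details of any particular VBS system.

    # A minimal, hypothetical sketch of frame-level similarity ranking for
    # known-item search. Names, shapes and data are illustrative assumptions,
    # not details of any particular VBS system.
    import numpy as np

    def rank_keyframes(query_vec, keyframe_vecs, top_k=10):
        """Return indices of the top_k keyframes most similar to the query.

        query_vec     -- 1-D feature vector describing the searched scene
        keyframe_vecs -- 2-D array with one row per video keyframe
        """
        # Cosine similarity equals the dot product of L2-normalized vectors.
        q = query_vec / np.linalg.norm(query_vec)
        K = keyframe_vecs / np.linalg.norm(keyframe_vecs, axis=1, keepdims=True)
        scores = K @ q
        # Highest scores first; the user inspects these candidates interactively.
        return np.argsort(-scores)[:top_k]

    # Example with random features (2048-D, as produced by common CNN backbones).
    rng = np.random.default_rng(0)
    keyframes = rng.normal(size=(5000, 2048))
    query = rng.normal(size=2048)
    print(rank_keyframes(query, keyframes, top_k=5))

In real systems such an exhaustive scan is typically replaced by a metric or approximate nearest-neighbor index, and the returned candidates feed further interaction (browsing, query refinement) rather than serving as a final answer.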


Notes

  1. www.videobrowsershowdown.org.

  2. The tasks at VBS 2018 were organized into three sessions – expert users (ID 1–12), novice users (ID 13–20), and a test session for the experts organized one day before the competition (ID 21–30).


Acknowledgments

This paper has been supported by the Czech Science Foundation (GAČR) project no. 17-22224S and by grant SVV-260451. This work is also supported by Universität Klagenfurt and Lakeside Labs GmbH, Klagenfurt, Austria, and by funding from the European Regional Development Fund and the Carinthian Economic Promotion Fund (KWF) under grant KWF 20214 u. 3520/26336/38165.

Author information

Corresponding author: Werner Bailer.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Lokoč, J., Bailer, W., Schöffmann, K. (2018). What Is the Role of Similarity for Known-Item Search at Video Browser Showdown? In: Marchand-Maillet, S., Silva, Y., Chávez, E. (eds.) Similarity Search and Applications. SISAP 2018. Lecture Notes in Computer Science, vol. 11223. Springer, Cham. https://doi.org/10.1007/978-3-030-02224-2_8

  • DOI: https://doi.org/10.1007/978-3-030-02224-2_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02223-5

  • Online ISBN: 978-3-030-02224-2

  • eBook Packages: Computer Science, Computer Science (R0)
