
Mining Landmark Images for Scene Reconstruction from Weakly Annotated Video Collections

  • Conference paper
  • MultiMedia Modeling (MMM 2024)

Abstract

Many XR productions require reconstructions of landmarks such as buildings or public spaces. Shooting content on demand is often not feasible; tapping into audiovisual archives for images and videos as input for reconstruction is therefore a promising alternative. However, if annotated at all, videos in (broadcast) archives are annotated at the item level, so it is not known which frames contain the landmark of interest. We propose an approach to mine frames containing relevant content in order to train a fine-grained classifier that can then be applied to unlabeled data. To ensure the reproducibility of our results, we construct a weakly labeled video landmark dataset (WAVL) based on Google Landmarks v2. We show that our approach outperforms a state-of-the-art landmark recognition method in this weakly labeled setting on two large datasets.
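The mining step described in the abstract can be pictured as ranking video frames against a small set of trusted images of the target landmark and keeping only confident matches as pseudo-labeled training data for the fine-grained classifier. The following is a minimal sketch of that idea, not the authors' implementation: it assumes item-level labels, a handful of seed images per landmark, and a generic pretrained embedding (here a torchvision ResNet-50); the function names (embed, mine_frames) and the 0.6 similarity threshold are illustrative choices.

```python
# Sketch of weakly-supervised frame mining: rank frames of an item-level
# labeled video against trusted seed images and keep confident matches.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained backbone used as a generic image embedder (classifier head removed).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Return L2-normalized embeddings for a list of image file paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return F.normalize(backbone(batch.to(device)), dim=1)

def mine_frames(frame_paths, seed_paths, threshold=0.6):
    """Keep frames whose maximum cosine similarity to any seed image of the
    landmark exceeds `threshold`; these become pseudo-labeled training data."""
    seeds = embed(seed_paths)        # (S, D) seed embeddings
    frames = embed(frame_paths)      # (F, D) frame embeddings
    sims = frames @ seeds.T          # cosine similarities (vectors are normalized)
    keep = sims.max(dim=1).values >= threshold
    return [p for p, k in zip(frame_paths, keep) if k]

# Usage (hypothetical file names): frames extracted from a video whose
# item-level label is one landmark, ranked against a few trusted seed photos.
# kept = mine_frames(["frame_000.jpg", "frame_001.jpg"],
#                    ["seed_a.jpg", "seed_b.jpg"])
```

The mined frames would then serve as positives for training the fine-grained classifier, which in turn can be applied to entirely unlabeled archive material.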




Acknowledgements

The authors would like to thank Stefanie Onsori-Wechtitsch for providing the segmenter implementation.

The research leading to these results has been funded partially by the European Union’s Horizon 2020 research and innovation programme under grant agreement n° 951911 AI4Media (https://ai4media.eu), and by Horizon Europe under grant agreement n° 101070250 XRECO (https://xreco.eu/).

Author information


Corresponding author

Correspondence to Helmut Neuschmied.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Neuschmied, H., Bailer, W. (2024). Mining Landmark Images for Scene Reconstruction from Weakly Annotated Video Collections. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_12

  • DOI: https://doi.org/10.1007/978-3-031-53302-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53301-3

  • Online ISBN: 978-3-031-53302-0

  • eBook Packages: Computer Science, Computer Science (R0)
