Abstract
Many XR productions require reconstructions of landmarks such as buildings or public spaces. Shooting content on demand is often not feasible, so tapping into audiovisual archives for images and videos as input for reconstruction is a promising alternative. However, videos in (broadcast) archives are annotated at the item level, if at all, so it is not known which frames contain the landmark of interest. We propose an approach that mines frames containing relevant content in order to train a fine-grained classifier, which can then be applied to unlabeled data. To ensure the reproducibility of our results, we construct a weakly labeled video landmark dataset (WAVL) based on Google Landmarks v2. We show that our approach outperforms a state-of-the-art landmark recognition method in this weakly labeled setting on two large datasets.
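The mine-then-train idea in the abstract can be illustrated with a toy sketch. Everything below is hypothetical and not the paper's actual pipeline: the embeddings, the prototype, the similarity threshold of 0.5, and the nearest-centroid classifier are all stand-ins chosen only to make the two stages (mining pseudo-labeled frames from an item-level-labeled video, then classifying unlabeled frames) concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Toy embeddings: the video carries the item-level label "landmark X",
# but only some of its frames actually show the landmark.
prototype = rng.normal(size=DIM)                              # hypothetical class embedding
showing = prototype + rng.normal(scale=0.3, size=(40, DIM))   # frames showing the landmark
other = rng.normal(size=(60, DIM))                            # unrelated frames
frames = np.vstack([showing, other])

def cosine_to(proto, mat):
    """Cosine similarity of each row of `mat` to `proto`."""
    return (mat @ proto) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(proto))

# Mining step: frames sufficiently similar to the prototype become
# pseudo-labeled positives for training a fine-grained classifier.
sims = cosine_to(prototype, frames)
mined_pos = frames[sims > 0.5]
mined_neg = frames[sims <= 0.5]

# Minimal stand-in "classifier": nearest centroid on the mined pseudo-labels.
pos_centroid = mined_pos.mean(axis=0)
neg_centroid = mined_neg.mean(axis=0)

def predict(frame):
    """Return 1 if the frame is closer to the mined positive centroid."""
    return int(np.linalg.norm(frame - pos_centroid)
               < np.linalg.norm(frame - neg_centroid))

# Apply to a new unlabeled frame that happens to show the landmark.
new_frame = prototype + rng.normal(scale=0.3, size=DIM)
print(predict(new_frame))
```

In the toy setup the two stages separate cleanly because the relevant frames cluster tightly around the prototype; with real archive footage the mining threshold and the classifier would of course have to cope with far noisier embeddings.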
Acknowledgements
The authors would like to thank Stefanie Onsori-Wechtitsch for providing the segmenter implementation.
The research leading to these results has been funded partially by the European Union's Horizon 2020 research and innovation programme under grant agreement n° 951911 AI4Media (https://ai4media.eu), and by Horizon Europe under grant agreement n° 101070250 XRECO (https://xreco.eu/).
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Neuschmied, H., Bailer, W. (2024). Mining Landmark Images for Scene Reconstruction from Weakly Annotated Video Collections. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-53301-3
Online ISBN: 978-3-031-53302-0
eBook Packages: Computer Science, Computer Science (R0)