Two Video Data Sets for Tracking and Retrieval of Out of Distribution Objects

  • Conference paper
  • Computer Vision – ACCV 2022 (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13845)


Abstract

In this work, we present two video test data sets for the novel computer vision (CV) task of out-of-distribution tracking (OOD tracking). Here, OOD objects are understood as objects whose semantic class lies outside the semantic space of an underlying image segmentation algorithm, or as instances within that semantic space which nevertheless look decisively different from the instances contained in the training data. OOD objects occurring in video sequences should be detected on single frames as early as possible and tracked over their time of appearance for as long as possible. During their time of appearance, they should also be segmented as precisely as possible. We present the SOS data set, containing 20 video sequences of street scenes and more than 1000 labeled frames with up to two OOD objects. We furthermore publish the synthetic CARLA-WildLife data set, consisting of 26 video sequences with up to four OOD objects on a single frame. We propose metrics to measure the success of OOD tracking and develop a baseline algorithm that efficiently tracks the OOD objects. As an application that benefits from OOD tracking, we retrieve OOD sequences from unlabeled videos of street scenes containing OOD objects.
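The abstract mentions a baseline algorithm that links per-frame OOD detections into tracks. The paper's actual method is not reproduced here; the following is a minimal sketch of one plausible approach, greedy frame-to-frame association of OOD segmentation masks by mask IoU. The function name `track_ood_masks` and the 0.3 IoU threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / union if union > 0 else 0.0

def track_ood_masks(frames, iou_threshold=0.3):
    """Greedily link per-frame OOD instance masks into tracks.

    frames: list (one entry per video frame) of lists of boolean masks.
    Returns a list of tracks; each track is a list of (frame_idx, mask).
    """
    tracks = []   # all tracks created so far
    active = []   # indices into `tracks` that were extended at the previous frame
    for t, masks in enumerate(frames):
        unmatched = list(range(len(masks)))
        next_active = []
        for ti in active:
            prev_mask = tracks[ti][-1][1]
            # best IoU match among the still-unmatched masks of this frame
            best, best_j = 0.0, None
            for j in unmatched:
                iou = mask_iou(prev_mask, masks[j])
                if iou > best:
                    best, best_j = iou, j
            if best_j is not None and best >= iou_threshold:
                tracks[ti].append((t, masks[best_j]))
                unmatched.remove(best_j)
                next_active.append(ti)
        # every unmatched mask starts a new track (a newly appearing OOD object)
        for j in unmatched:
            tracks.append([(t, masks[j])])
            next_active.append(len(tracks) - 1)
        active = next_active
    return tracks
```

A track ends as soon as no mask in the next frame overlaps it sufficiently; more robust variants could bridge short occlusions or use optical flow to warp the previous mask before matching.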


Notes

  1. https://youtu.be/_DbV8XprDmc


Acknowledgements

We thank Sidney Pacanowski for the labeling effort, Dariyoush Shiri for support in coding, Daniel Siemssen for support in the generation of CARLA data and Matthias Rottmann for interesting discussions. This work has been funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) via the research consortia Safe AI for Automated Driving (grant no. 19A19005R), AI Delta Learning (grant no. 19A19013Q), AI Data Tooling (grant no. 19A20001O) and the Ministry of Culture and Science of the German state of North Rhine-Westphalia as part of the KI-Starter research funding program.

Author information

Corresponding author

Correspondence to Kira Maag.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 8767 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Maag, K., Chan, R., Uhlemeyer, S., Kowol, K., Gottschalk, H. (2023). Two Video Data Sets for Tracking and Retrieval of Out of Distribution Objects. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13845. Springer, Cham. https://doi.org/10.1007/978-3-031-26348-4_28

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-26348-4_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26347-7

  • Online ISBN: 978-3-031-26348-4

  • eBook Packages: Computer Science (R0)
