Is Crowdsourcing for Optical Flow Ground Truth Generation Feasible?

  • Conference paper
Computer Vision Systems (ICVS 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7963)

Abstract

In 2012, three new optical flow reference datasets were published, two of them containing ground truth [1,2,3]. None of them contains ground truth for real-world, large-scale outdoor scenes with dynamically and independently moving objects, because no measurement device exists that can record such data with sufficiently high accuracy. Yet ground truth is needed to assess the safety of, for example, driver assistance systems. To close this gap, we analyse the performance of uninformed human motion annotators against existing, accurate ground truth. Feature annotation bias and non-rigid motions are major concerns, limiting our results to pixel accuracy. Our approach is currently the only way to create ground truth for dynamic outdoor sequences, and it is feasible whenever pixel accuracy suffices for performance analysis and piecewise rigid motions dominate the scene. Finally, we show that our approach is highly effective with respect to annotation cost per frame compared to our baseline method [4].


References

  1. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Computer Vision and Pattern Recognition (CVPR), Providence, USA (June 2012)

  2. Butler, D.J., Wulff, J., Stanley, G.B., Black, M.J.: A naturalistic open source movie for optical flow evaluation. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part VI. LNCS, vol. 7577, pp. 611–625. Springer, Heidelberg (2012)

  3. Meister, S., Jähne, B., Kondermann, D.: Outdoor stereo camera system for the generation of real-world benchmark data sets. Optical Engineering 51 (2012)

  4. Liu, C., Freeman, W.T., Adelson, E.H., Weiss, Y.: Human-assisted motion annotation. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1–8 (2008)

  5. Sun, D., Roth, S., Black, M.J.: Secrets of optical flow estimation and their principles. In: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 2432–2439. IEEE (2010)

  6. Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. International Journal of Computer Vision 92(1), 1–31 (2011)

  7. Meister, S., Kondermann, D.: Real versus realistically rendered scenes for optical flow evaluation. In: Proceedings of 14th ITG Conference on Electronic Media Technology, Informatik Centrum Dortmund e.V. (2011)

  8. McCane, B., Novins, K., Crannitch, D., Galvin, B.: On benchmarking optical flow (2001), http://of-eval.sourceforge.net/

  9. Mac Aodha, O., Brostow, G.J., Pollefeys, M.: Segmenting video into classes of algorithm-suitability. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 1054–1061 (2010)

  10. Vaudrey, T., Rabe, C., Klette, R., Milburn, J.: Differences between stereo and motion behaviour on synthetic and real-world stereo sequences. In: Proc. of 23rd International Conference on Image and Vision Computing New Zealand (IVCNZ 2008), pp. 1–6 (2008)

  11. Spiro, I., Taylor, G., Williams, G., Bregler, C.: Hands by hand: Crowd-sourced motion tracking for gesture annotation. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 17–24. IEEE (2010)

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Donath, A., Kondermann, D. (2013). Is Crowdsourcing for Optical Flow Ground Truth Generation Feasible?. In: Chen, M., Leibe, B., Neumann, B. (eds) Computer Vision Systems. ICVS 2013. Lecture Notes in Computer Science, vol 7963. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39402-7_20

  • DOI: https://doi.org/10.1007/978-3-642-39402-7_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39401-0

  • Online ISBN: 978-3-642-39402-7

  • eBook Packages: Computer Science (R0)
