Abstract
In 2012, three new optical flow reference datasets were published, two of them containing ground truth [1,2,3]. None of them provides ground truth for real-world, large-scale outdoor scenes with dynamically and independently moving objects, because no measurement device exists that can record such data with sufficiently high accuracy. Yet ground truth is needed to assess the safety of, e.g., driver assistance systems. To close this gap, we analyse the performance of uninformed human motion annotators against existing, accurate ground truth. Feature annotation bias and non-rigid motions are the major concerns, limiting our results to pixel accuracy. Our approach is currently the only way to create ground truth for dynamic outdoor sequences, and it is feasible whenever pixel accuracy suffices for performance analysis and piecewise rigid motions dominate the scene. Finally, we show that our approach is highly cost-effective per annotated frame compared to our baseline method [4].
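Annotator performance against existing ground truth is typically quantified with the average endpoint error (AEE), the standard optical flow metric used in the evaluation methodology of Baker et al. [6]. A minimal sketch (array shapes and names are illustrative, not taken from the paper):

```python
import numpy as np

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth flow
    vectors; both arrays have shape (H, W, 2) holding (u, v) per pixel."""
    diff = flow_est - flow_gt
    epe = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel endpoint error
    return epe.mean()

# Hypothetical check: a zero-motion ground truth versus an annotation
# that is off by (0.5, 0.5) pixels everywhere.
gt = np.zeros((2, 2, 2))
ann = np.full((2, 2, 2), 0.5)
print(average_endpoint_error(ann, gt))  # → 0.7071... (= sqrt(0.5))
```

An AEE below one pixel over a sequence would correspond to the pixel-accuracy regime the abstract describes as sufficient for this kind of performance analysis.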
References
1. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Computer Vision and Pattern Recognition (CVPR), Providence, USA (June 2012)
2. Butler, D.J., Wulff, J., Stanley, G.B., Black, M.J.: A naturalistic open source movie for optical flow evaluation. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part VI. LNCS, vol. 7577, pp. 611–625. Springer, Heidelberg (2012)
3. Meister, S., Jähne, B., Kondermann, D.: Outdoor stereo camera system for the generation of real-world benchmark data sets. Optical Engineering 51 (2012)
4. Liu, C., Freeman, W.T., Adelson, E.H., Weiss, Y.: Human-assisted motion annotation. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1–8 (2008)
5. Sun, D., Roth, S., Black, M.J.: Secrets of optical flow estimation and their principles. In: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 2432–2439. IEEE (2010)
6. Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. International Journal of Computer Vision 92(1), 1–31 (2011)
7. Meister, S., Kondermann, D.: Real versus realistically rendered scenes for optical flow evaluation. In: Proceedings of 14th ITG Conference on Electronic Media Technology, Informatik Centrum Dortmund e.V. (2011)
8. McCane, B., Novins, K., Crannitch, D., Galvin, B.: On benchmarking optical flow (2001), http://of-eval.sourceforge.net/
9. Mac Aodha, O., Brostow, G.J., Pollefeys, M.: Segmenting video into classes of algorithm-suitability. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 1054–1061 (2010)
10. Vaudrey, T., Rabe, C., Klette, R., Milburn, J.: Differences between stereo and motion behaviour on synthetic and real-world stereo sequences. In: Proc. of 23rd International Conference on Image and Vision Computing New Zealand (IVCNZ 2008), pp. 1–6 (2008)
11. Spiro, I., Taylor, G., Williams, G., Bregler, C.: Hands by hand: Crowd-sourced motion tracking for gesture annotation. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 17–24. IEEE (2010)
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Donath, A., Kondermann, D. (2013). Is Crowdsourcing for Optical Flow Ground Truth Generation Feasible?. In: Chen, M., Leibe, B., Neumann, B. (eds) Computer Vision Systems. ICVS 2013. Lecture Notes in Computer Science, vol 7963. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39402-7_20
DOI: https://doi.org/10.1007/978-3-642-39402-7_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39401-0
Online ISBN: 978-3-642-39402-7