
Synthetic sequences and ground-truth flow field generation for algorithm validation

Multimedia Tools and Applications

Abstract

Research in computer vision is advanced by the availability of good datasets, which help to improve algorithms, validate results, and enable comparative analyses. Datasets can be real or synthetic. For some computer vision problems, such as optical flow, no sensor can directly capture high-accuracy ground truth in natural outdoor scenes, although ground-truth data for real scenarios can be obtained in a laboratory setup with limited motion. In this situation, computer graphics offers a viable alternative for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and to generate image sequences together with their ground-truth flow fields. In particular, we generate a dataset of driving scenarios whose sequences vary in the speed of the on-board vision system, the road texture, the complexity of the vehicle's motion, and the presence of independently moving vehicles in the scene. This dataset enables the analysis and adaptation of existing optical flow methods, and can lead to new approaches, particularly for driver assistance systems.
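The kind of validation such a dataset enables can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the paper's Maya-based rendering pipeline: it assumes a simple pinhole camera translating forward along its optical axis (a crude stand-in for an on-board driving camera) with a known per-pixel depth map, derives the resulting dense ground-truth flow, and computes the standard average endpoint error (AEE) between an estimated and a ground-truth flow field. All function names are illustrative.

```python
import numpy as np

def ground_truth_flow_forward(h, w, tz, depth):
    """Dense ground-truth optical flow (in pixels) for a pinhole camera
    translating by tz along its optical axis over a scene with the given
    per-pixel depth map. Pixels expand radially away from the image
    centre (the focus of expansion).
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    x = xs - w / 2.0  # pixel coordinates relative to the principal point
    y = ys - h / 2.0
    # From x' = f X / (Z - tz) and x = f X / Z it follows that
    # u = x' - x = x * tz / (Z - tz); the focal length cancels out.
    scale = tz / (depth - tz)
    return x * scale, y * scale

def average_endpoint_error(u_est, v_est, u_gt, v_gt):
    """Standard flow evaluation metric: mean Euclidean distance between
    estimated and ground-truth flow vectors over all pixels."""
    return float(np.mean(np.hypot(u_est - u_gt, v_est - v_gt)))
```

For example, a flat fronto-parallel depth map yields a purely radial expansion field with zero flow at the image centre, and a perfect estimate yields an AEE of zero; any real optical flow method evaluated against rendered ground truth would report a positive AEE.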


Notes

  1. www.autodesk.com/maya


Acknowledgments

This work was supported in part by the Spanish Government under Project TIN2011-25606. The work of N. Onkarappa was supported in part by the Catalan Government through the Agency for Management of University and Research Grants (AGAUR) under an FI Grant.

Author information

Correspondence to Naveen Onkarappa.

About this article

Cite this article

Onkarappa, N., Sappa, A.D. Synthetic sequences and ground-truth flow field generation for algorithm validation. Multimed Tools Appl 74, 3121–3135 (2015). https://doi.org/10.1007/s11042-013-1771-7

